
Ray Kurzweil


Esther Schindler writes "If you ever needed evidence that Isaac Asimov was a genius at extrapolating future technology from limited data, you'll enjoy this 1964 article in which he predicts what we'll see at the 2014 World's Fair. For instance: 'Robots will neither be common nor very good in 2014, but they will be in existence. The I.B.M. exhibit at the present fair has no robots but it is dedicated to computers, which are shown in all their amazing complexity, notably in the task of translating Russian into English. If machines are that smart today, what may not be in the works 50 years hence? It will be such computers, much miniaturized, that will serve as the "brains" of robots. In fact, the I.B.M. building at the 2014 World's Fair may have, as one of its prime exhibits, a robot housemaid: large, clumsy, slow-moving but capable of general picking-up, arranging, cleaning and manipulation of various appliances. It will undoubtedly amuse the fairgoers to scatter debris over the floor in order to see the robot lumberingly remove it and classify it into "throw away" and "set aside." (Robots for gardening work will also have made their appearance.)' It's really fun (and sometimes sigh-inducing) to see where he was accurate and where he wasn't. And, of course, the whole notion that we'd have a world's fair is among the inaccurate predictions."


Flipping through the pocket programming guide for South by Southwest 2013 feels a little like reading an entire year of one of those Joke-A-Day or Far Side desk calendars from your childhood in one sitting: you are really not supposed to take all of this in on a single day.

Getting Started With Angel Investing
#catvidfest: Is This The End Of Art?
What Can We Learn From The Unabomber?
Extreme GPS: Limits of Security & Precision
Latinos y Mobile: A Silver Bullet?
The Comfy Chair! Are We Sitting Too Much?

Some sound like they are for babies, others sound like they are for EMBA students, most sound like they are for bloggers. And then there was

Female Orgasm: The Regenerative Human Technology




How To Create A Mind: Ray Kurzweil at TEDxSiliconAlley

In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TED...
From TEDxTalks, 21:41.


I just got done reading Ray Kurzweil's How to Create a Mind, his latest on how machines will soon (2030ish) pass the Turing test and then basically become like the robots envisaged in the '60s, with distinct personalities, acting as faithful butlers to our various needs.

And then, today over on The Edge, Bruce Sterling is saying that's all a pipe dream, computers are still pretty dumb.  As someone who works with computer algorithms all day, I too am rather unimpressed by a computer's intelligence.

He also notes that IBM's Watson won a Jeopardy! contest by reading all of Wikipedia, a feat clearly beyond any human mind. Further, as Kurzweil notes, many humans are pretty simple, and so it's not inconceivable that a computer can replicate your average human, if only because the average human is pretty predictable. Siri is already funnier than perhaps 10% of humans.

But I doubt they will ever approximate a human, because humans have what machines can't have: emotions. Emotions are necessary for prioritizing, and good prioritization is the essence of wisdom. One can be a genius, but if you are focused solely on one thing you are autistic, and such people aren't called idiot savants for nothing.

Just as objectivity is not the result of objective scientists, but an emergent result of the scientific community, consciousness may not be the result of a thoughtful individual, but a byproduct of a striving individual enmeshed in a community of other minds, each wishing to understand the other minds better so that they can rise above them. I can see how you could program this drive into a computer: a deep parameter that gives points for how many times others call its app, perhaps.

Kurzweil notes that among species of voles, those that form monogamous bonds have oxytocin and vasopressin receptors, and those that opt for one-night stands do not. Hard-wired emotions dictate behavior. But it's one thing to program a desire for company and an aversion to loneliness, quite another to program a truly independent will.

Proto-humans presumably had the consciousness of dogs, so something in our striving created consciousness incidentally. Schopenhauer said, "we don't want a thing because we have found reasons for it, we find reasons for it because we want it." The intellect may at times seem to lead the will, but only as a guide leads the master. He saw the will to power, and the fear of death, as the essence of humanity. Nietzsche noted similarly that "happiness is the feeling that power increases." I suppose one could try to put this into a program as a deep preference, but I'm not sure how: what, to a computer, could be analogous to power wielded by humans?

Kierkegaard thought the crux of human consciousness was anxiety: worrying about doing the right thing. That is, consciousness is not merely having perceptions and thoughts, even self-referential thoughts, but doubt, anxiety about one's priorities and how well one is mastering them. We all have multiple priorities--self-preservation, sensual pleasure, social status, meaning--and the higher we go the more doubtful we are about them. Having no doubt, like having no worries, isn't bliss; it's the end of consciousness. That's what always bothers me about people who suggest we search for flow: like good music or wine, it's nice occasionally, like any other sensual pleasure, but only in the context of a life of perceived earned success.

Consider the anglerfish. The smaller male is born with a huge olfactory system, and once he has developed some gonads, smells around for a gigantic female. When he finds her, he bites into her skin and releases an enzyme that digests the skin of his mouth and her body, fusing the pair down to the blood-vessel level. He is then fed by, and has his waste removed by, the female's blood supply; the male is basically turned into a parasite. However, he is a welcome parasite, because the female needs his sperm. What happens to a welcome parasite? Other than his gonads, his organs simply disappear, because all that remains is all that is needed. No eyes, no jaw, no brain. He has achieved his purpose, and could just chill in some Confucian calm, but instead he dissolves his brain entirely.

A computer needs pretty explicit goals because otherwise the state space of things it will do blows up, and one can end up figuratively calculating the 10^54th digit of pi: difficult, to be sure, and not totally useless, but still pretty useless. Without anxiety one could easily end up in an intellectual cul-de-sac and not care. I don't see how a computer program with multiple goals would feel anxiety, because programs don't have finite lives; they can work continuously, forever, making it nonproblematic that one didn't achieve some goal by the time one's eggs ran out. Our anxiety makes us satisfice, or find novel connections that do not do what we originally wanted but do something very useful nonetheless, and in the process help increase our sense of meaning and status (often, by helping others).

Anxiety is what makes us worry that we are at best maximizing an inferior local maximum and so need to start over, and this helps us figure things out with minimal direction. A program that does only what you tell it to do is pretty stupid compared to even stupid humans, and don't think for a second that neural nets or hierarchical hidden Markov models (HHMMs) can figure out anything that isn't extremely well defined (like solving captchas, where Kurzweil thinks HHMMs show us something analogous to human thought).
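The local-maximum worry has a crude mechanical analogue in random-restart hill climbing, a standard optimization trick. This is an illustrative sketch of my own (not anything from Kurzweil's book): a plain hill climber commits to the first peak it stumbles onto, while the restarting version keeps "doubting" its current answer, throwing it away, and trying again from somewhere new.

```python
import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: move to a random neighbor only if it improves f."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def restart_hill_climb(f, n_restarts=20, lo=-10.0, hi=10.0):
    """'Anxious' search: repeatedly abandon the current answer, start over
    from a random point, and keep the best peak found across all restarts."""
    best = None
    for _ in range(n_restarts):
        x = hill_climb(f, random.uniform(lo, hi))
        if best is None or f(x) > f(best):
            best = x
    return best
```

On a single-peaked objective the restarts are redundant; on a bumpy one they are the only thing standing between the program and a confident answer at an inferior peak, which is exactly the complacency the paragraph above is worrying about.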

Schopenhauer, Kierkegaard, and Nietzsche were all creative, deep thinkers about the essence of humanity, and they were all very lonely and depressed. When young they thought they were above simple romantic pair bonds, but all seemed to have deep regrets later, and I think this caused them to apply themselves more resolutely to abstract ideas (also, alas, women really like confidence in men, which leads to all sorts of interesting issues, including that their doubt hindered their ability to later find partners, and that perhaps women aren't fully conscious (beware troll!)). Humans have trade-offs, and we are always worrying whether we are making the right ones, because no matter how smart you are, you can screw up a key decision and pay for it the rest of your life. We need fear, pride, shame, lust, greed and envy, in moderation, and I think you can probably get those into a computer. But anxiety, doubt, I don't think can be programmed, because logically a computer is always doing the very best it can, in that its only discretion is purely random; it perceives only risk, not uncertainty (per Keynes/Knight/Minsky), and thus no doubt.



An anonymous reader writes "Nataly Kelly writes in the Huffington Post about Google's strategy of hiring Ray Kurzweil and how the company likely intends to use language translation to revolutionize the way we share information. From the article: 'Google Translate is not just a tool that enables people on the web to translate information. It's a strategic tool for Google itself. The implications of this are vast and go beyond mere language translation. One implication might be a technology that can translate from one generation to another. Or how about one that slows down your speech or turns up the volume for an elderly person with hearing loss? That enables a stroke victim to use the clarity of speech he had previously? That can pronounce using your favorite accent? That can convert academic jargon to local slang? It's transformative. In this system, information can walk into one checkpoint as the raucous chant of a 22-year-old American football player and walk out as the quiet whisper of a 78-year-old Albanian grandmother.'"


Read more of this story at Slashdot.


Most people and corporations are terrible at planning for the future, according to prominent futurist Dr. Peter Bishop of the University of Houston.

"The way the public, and particularly the way policy and decision makers, talk about the future is with way more certainty than they should have," Bishop explained in a phone conversation with Business Insider.

"To tell you what's going to happen is asking the wrong question," he said. "It's not what the future will be, but what it could be or what it might be. And the problem is that in the halls of decision, in the boardroom or the Pentagon or something like that, that degree of uncertainty is not welcome."

Bishop says there are subtleties to future planning that can only be grasped by a futurist.

Several major corporations feel the same way, which is why Google just hired futurist Ray Kurzweil as director of engineering and Cisco employs futurist Dave Evans. Bishop himself has worked with IBM, NASA, Nestle, and more.

Says Bishop: "We're encouraging companies to spend a little bit of time thinking about the future, single digit percents. They don't do it because they don't know how to do it so they think it's a waste of time, and the reason that they don't know how is because nobody in their schools has ever taught them how to do it."

So what are the keys to thinking like a futurist?

Futurists think in terms of "multiple futures" rather than one. Not only does this increase the chances that one will have a plan for the actual future, but it also "intellectually conditions" one to adapt to change.

Futurists also see value in challenging basic assumptions.

"What we do now is basically assume," says Bishop. "Then we go on and make plans, and those assumptions are fine. The problem is we don't challenge those assumptions, and what we've been taught in physics class is to state your assumptions. We were never taught to say, 'and what if those assumptions are wrong? What's different about it?'"

Just look at what happened in New Orleans in 2005:

"One of the classic cases of this was the plan for responding in New Orleans to a hurricane, like Katrina in 2005. On the very first page of that plan, published by LSU under a grant from the Department of the Interior, the very first assumption was: the levees will hold. They should say that, and they should make that assumption. The problem is that they should also have said, and what if they don't? Now that may have sent them off to go look at them, number one; that would have been a nice thing to do. And secondly ... what is the plan if the levees do not hold? Because then water comes pouring in from the lake and the river and stuff, and they had nothing about that."

Bishop says big changes are coming in our lifetime:

"There will be significant change within our tenure within any position within our lifetime for sure, that we will have to learn to live in a new world — to some extent. It's not completely new the way some futurists will say. But it will be new enough that we will be uncomfortable, we will be unprepared, and that we will have to learn new skills and new techniques in order to be successful in that future compared to how we are being successful today, or indeed how we were prepared to be successful when we were in school or training."




In the fall of 2011 Peter Norvig taught a class with Sebastian Thrun on artificial intelligence at Stanford attended by 175 students in situ -- and over 100,000 via an interactive webcast. He shares what he learned about teaching to a global classroom.


The Pentagon’s new Avatar project, unveiled by Danger Room a few weeks back, sounds freaky enough: Soldiers practically inhabiting the bodies of robots, who’d act as “surrogates” for their human overlords in battle.

But according to Dmitry Itskov, a 31-year-old Russian media mogul, the U.S. military’s Avatar initiative doesn’t go nearly far enough. He’s got a massive, sci-fi-esque venture of his own that he hopes will put the Pentagon’s project to shame. Itskov’s plan: Construct robots that’ll (within 10 years, he hopes) actually store a human’s mind and keep that consciousness working. Forever.

“This project is leading down the road to immortality,” Itskov, who founded New Media Stars, a Russian company that runs several online news outlets, tells Danger Room. “A person with a perfect Avatar will be able to remain part of society. People don’t want to die.”

Itskov’s project, also called “Avatar,” actually precedes the Pentagon’s. He launched the initiative a year ago, but recently divulged more details to a group of futurists — including Ray Kurzweil — at a three-day conference, called Global Future 2045, held in Moscow.

Until now, most of the work on Itskov’s Avatar has taken place in Russia, where he claims to have hired 30 researchers — all of them paid out of his own deep pockets. Now, Itskov plans to take the mission global. “I want to collaborate with scientists from around the world,” he says. “This is a new strategy for the future; for humanity.”

So how would Itskov’s “Avatar” work? Well, he anticipates developing the program in stages. Within the next few years, Itskov plans to deploy robots that can be operated by the human mind. That’s actually not too wild a proposition: Pentagon-backed research has already demonstrated a monkey controlling a robotic arm using some nifty mind-meld tech, for example. And one study on human patients, out of Johns Hopkins, is using brain implants to control artificial limbs.

After phase one of “Avatar,” however, Itskov’s ambitions arguably eclipse even those of the Pentagon’s maddest mad scientists. In 10 years, he anticipates “transplanting” a human mind into a robotic one. After that, Itskov wants to do away with surgical procedures and instead upload the contents of the mind into its brand new, artificial robo-body. And, last but not least, within 30 years Itskov anticipates developing hologram-type bodies — instead of tangible robotic ones — that can “host” human consciousness.

“Holograms give plenty of advantages. You can walk through walls, move at the speed of light,” he says. “Remember in Star Wars, Obi-Wan’s hologram? That was pretty amazing.”

Amazing, yes. Scientifically feasible? Certainly not right now, and maybe not ever. “I understand these are some very big challenges for scientists,” Itskov acknowledges. “But I believe in something you call ‘The American Dream.’ If you put all your energy and time into something, you can make it a reality.”

Itskov, who plans to open two American offices this year, even hopes to collaborate with Darpa on the agency's 'Avatar' program. And he's keen to talk to agency scientists about the next, more far-out stages of brain-machine interfaces that he plans to develop. "I'm sure someone at Darpa is interested in taking this further," he says.

So far, at least, Danger Room hasn’t come across any Darpa-funded ventures to develop immortal hologram-brain interfaces. But the agency just might find a little extra blue-sky inspiration in Itskov, who likens Avatar to Darpa’s best-known innovation: The internet.

“Years ago, people didn’t believe the internet could work,” he says. “I think of Avatar in the same light. Right now, the idea is new and radical. It won’t always be that way.”


First time accepted submitter Lyrdor writes "The Terms of Service for the Stanford Artificial Intelligence class points to how the free class this fall will be used for 'developing and evaluating the Online Course prior any commercial release of the Course' by a startup called KnowLabs. Although all of the press accounts so far have pointed to how the course would be a new example of Open Educational Resources from Stanford, the terms of service point to something else going on. On the LinkedIn page of David Stavens, Co-Founder and CEO at Know Labs, the startup is described on his profile as an 'angel funded startup to re-envision and revolutionize education using the social web and mobile apps. We launched www.ai-class.com and attracted over 130,000 students in 190+ countries.'"

Read more of this story at Slashdot.
