
Google Glass

Original author: 
Cyrus Farivar


Stephen Balaban is a co-founder of Lambda Labs, based in Palo Alto and San Francisco.

Cyrus Farivar

PALO ALTO, CA—Even while sitting in a café on University Avenue, one of Silicon Valley’s best-known commercial districts, it’s hard not to get noticed wearing Google Glass.

For more than an hour, I sat for lunch in late May 2013 with Stephen Balaban as he wore Google's new wearable tech. At least three people came by and gawked at the newfangled device, and Balaban even offered to let one woman try it on for herself—she turned out to be the wife of famed computer science professor Tony Ralston.

Balaban is the 23-year-old co-founder of Lambda Labs. It's a project he hopes will eventually become the “largest wearable computing software company in the world.” In Balaban's eyes, Lambda's recent foray into facial recognition only represents the beginning.


Original author: 
Eric Johnson

Oculus VR's Palmer Luckey, left, and Nate Mitchell, right. At center, AllThingsD's Lauren Goode tries out the Oculus Rift at CES 2013.


This is the second part of our two-part Q&A with Palmer Luckey and Nate Mitchell, the co-founders of virtual-reality gaming company Oculus VR. In Part One, Luckey and Mitchell discussed controlling expectations, what they want from developers, and the challenges of trying to make games do something radically different.

AllThingsD: What do you guys think about Google Glass? They’ve got their dev kits out right now, too, and –

Palmer Luckey: — What’s Google Glass? [laughs]

No, seriously, they’re doing something sort of similar with getting this wearable computing device to developers. Does the early buzz about Glass worry you?

Luckey: No. They’re not a gaming device, and they’re not a VR device, and they’re not an immersive device, and they’re five times more expensive than us.

Nate Mitchell: It’s just a completely different product. Wearable computing is super-interesting, and we’d love to see more wearable computing projects in the market. At Oculus, especially, we’re excited about the possibilities of Google Glass. We’ve seen it, we’ve checked it out, it’s very cool. But if you bring them together –

Luckey: Our image size is like 15 times larger than theirs. It’s like the difference between looking at a watch screen and a 60-inch monitor. It’s just an enormous difference.


Mitchell: With the Rift, you’re in there. You’re totally immersed in the world. I think one of the things people keep bringing up (with Glass) is the awkward, the social aspect. For the Rift, you strap into this thing, and you’re gone.

Luckey: It’s about being inside the virtual world, not caring about the real one.

Mitchell: You could put your Glass on in the virtual space.

Luckey: We could do that! We could simulate Glass. … It’s not that hard. You just have a tiny heads-up display floating there. A really tiny one.

Mitchell: I like it.

“Okay, Rift, take a picture. Okay, Rift, record a video …”

Luckey: There’s actually Second Life mods like that. People sell heads-up displays that you can buy.

Mitchell: Really?

Luckey: And they put information in there like distance to waypoints and stuff.

Mitchell: Oh, that’s cool!

Luckey: Yeah, they overlay it on the screen when your character’s wearing it.

I never really “got” Second Life. Minecraft, I can wrap my head around quickly. But Second Life …

Luckey: It’s very difficult to get into. There’s a steep learning curve. The last time I went into Second Life was to buy bitcoins from a crazy guy who was selling them below market value, but you had to go into Second Life to meet with him.

Mitchell: The underbelly of the Internet.

Luckey: They’re actually working on Oculus Rift support, though. The kind of people who make games like Second Life definitely see the potential for virtual reality — being able to step into your virtual life.

And if you’re completely immersed in the game, I guess that wherever you’re playing, you need to trust whoever’s around you.

Mitchell: Absolutely. There’s already some sneaking up on people happening in the office. Someone’s developing, they’re testing the latest integration, and then Palmer comes up and puts his hands on their shoulders: “Heyyyy, Andrew! What’s going on?” There’s a trust factor.

Luckey: Have you seen the Guillotine Simulator? (video below) Some people are showing that without even telling the person what it is: “Here, check this out!” “Whoa, what’s going on?” And then — [guillotine sound effect]

Mitchell: One thing that that does lead into is, we’re exploring ways to just improve the usability of the device. When you put on the Rift, especially with the dev kit, you’re shut off from the outside world. What we’re looking at doing is how can we make it easy to pull it off. Right now, you have to slip it over your head like ski goggles. The dev kit was designed to be this functional tool, not the perfect play-for-10-hours device. With the consumer version, we’re going for that polished user experience.

What about motion sickness? Is it possible to overcome the current need for people to only play for a short period of time on their first go?

Luckey: The better we make the hardware, the easier it’ll be for people to just pick up and play. Right now, the hardware isn’t perfect. That’s one of the innate problems of VR: You’re trying to make something that tricks your brain into thinking it’s real. Your brain is very sensitive at telling you things are wrong. The better you can make it, the more realistic you can make it, the more easily your brain’s gonna accept the illusion and not be throwing warning bells.

You mentioned in one of your recent speeches that the Scout in Team Fortress 2 –

Luckey: — he’s running at like 40 miles per hour. But it’s not just, “Oh, I’m running fast.” It’s the physics of the whole thing. In real life, if you are driving at 40mph, you can’t instantly start moving backward. You can’t instantly start strafing sideways. You have inertia. And that’s something that, right now, games are not designed to have. You’re reacting in these impossible ways.

Mitchell: In that same vein, just as Palmer’s saying the hardware’s not perfect yet, a huge part of it is the content.

Luckey: You could make perfect hardware. Pretend we have the Matrix. Now you take someone and put them in a fighter jet and have them spinning in circles. That’s going to make someone sick no matter how good it is, because that actually does make people sick. If you make perfect hardware, and then you do things that make people sick in real life, you’re gonna make them sick in VR, too. Right now, there’s lots of things going on in games that don’t make people sick only because they’re looking at them on a screen. Or, in so many games, they’ll have cutscenes where they take control of the camera and shake it around. You don’t want to do that in VR because you’re not actually shaking around in real life.

You’re changing the experience that you have previously established within VR.

Mitchell: It breaks the immersion.

Luckey: And that’s why it’s so hard to instantly transfer. In the original version of Half-Life 2, when you’d go into a new space for the first time, the whole game would just freeze for a second while it loads. It’s just a short freeze, but players were running along or driving along and all of a sudden, jjt! Now it looks like the whole world’s dragging along with you, and a lot of people feel very queasy when that happens.

Mitchell: It comes back to content. My talk at GDC was very specifically about how developing for VR is different from a 2-D monitor. All those things like cutscenes, storytelling, scale of the world — if the player is at four feet on the 2-D monitor and you put them in there, they immediately notice. They look down and they have the stereo cues: “I’m a midget!” So you make them taller, and now they don’t fit through doors. We really do believe that, at first, you’re going to see these ports of existing games, but the best “killer app” experiences are going to come from those made-for-VR games.

Luckey: And that’s not even to say it has to be new franchises. It doesn’t have to be a new type of game. But you want the content to be designed specifically for the hardware.

Mitchell: It’s just like the iPhone. The best games come from developers pairing hardware and software.


Oculus VR CEO Brendan Iribe testing out the Rift at D: Dive Into Media.

And that’s the 10,000-foot view: Does VR change game design in a fundamental way?

Mitchell: Yes. Fundamentally. Absolutely. I think, right now, there’s this great renaissance in the indie community. Indie developers are doing awesome things. If you look at games like The Walking Dead, you’ve got the mainstream genres here. You’re going to have a lot of these indie games start to feel more natural in virtual reality, because that’s almost, like, the intended experience.

Luckey: And not to invent a whole new genre on the fly, but you don’t see many first-person card games or something. There’s a lot of card game videogames, but there’s not many that are first-person because it wouldn’t make any sense to do.

Like a poker game where you could look around the table and read people’s reactions?

Mitchell: Exactly.

Luckey: And you could have all kinds of things integrated into it. I guess that would fit into the first-person-shooter genre, but not really, because you’re not moving and you’re not shooting. You’re just playing cards.

Mitchell: And if you look at the research that’s been done on virtual characters, it’s the type of thing where, if you smile at me in VR, even if you’re an NPC (non-playable character), I’m much more likely to smile back. Your brain is tricked into believing you’re there.

Luckey: There’s also fascinating research on confidence levels in VR, even tiny things. There was a study where a bunch of people performed tasks in real life, in a control group, and then performed them in VR. And the only difference is that one group in VR was about six inches taller than the other group. So, one was shorter than the NPC they were interacting with, one was taller. Universally, all of the “taller” people exhibited better negotiation with the NPCs. Then, they took them out (of the VR simulation) and they redid the (real-world) study, putting everyone back in another trial with a physical person. The people who’d been tall in VR and negotiated as a taller person did better when they went back into the real negotiation as well. It’s ridiculous.

Mitchell: That’s the sort of thing we’re super-excited about. That’s the dream.

And do you have a timeline for when –

Mitchell: When the dream comes to fruition?

Luckey: It’s a dream, man! Come on! [laughs]

Not when it comes to fruition. Are there milestones for specific accomplishments along the way?

Luckey: Sure, we have them, internally. [laughs]

Mitchell: We have a road map, but like we keep saying, a huge part of this is content. Without the content, it’s just a pair of ski goggles.

Luckey: And we don’t even know, necessarily, what a road map needs to look like. We’re getting this feedback, and if a lot of people need a certain feature — well, that means it’s going to take a little longer.

Mitchell: But we have a rough road map planned, and a lot of exciting stuff planned that I think you’ll see over the course of the next year.

And is there a timeline for when the first consumer version comes out?

Mitchell: It’s TBD. But what we can say is, Microsoft and Sony release their dev kits years in advance before they get up onstage and say, “The Xbox One is coming.” We went for the same strategy, just openly and publicly.

Luckey: And we don’t want to wait many years before doing it.

Mitchell: Right. So, right now, we’re giving developers the chance to build content, but they’re also co-developing the consumer version of the Rift with us. Once everyone’s really happy with it, that’s when you’ll see us come to market.

Luckey: And not sooner. We don’t want to announce something and then push for that date, even though we know we can make it better.


And what about the company, Oculus VR? Is this dream you’re talking about something you have to realize on your own? Do you want to someday get acquired?

Luckey: Our No. 1 goal is doing it on our own. We’re not looking to get acquired, we’re not looking to flip the company or anything. I mean, partnering with someone? Sure, we’re totally open to discussions. We’re not, like, we want to do this with no help.

But you wouldn’t want to be absorbed into a bigger company that’s doing more than just VR.

Mitchell: The goal has been to build great consumer VR, specifically for gaming. We all believe VR is going to be one of the most important technologies of –

Luckey: — ever!

Mitchell: Basically.

Not to be too hyperbolic or anything.

Luckey: It’s hard not to be. It’s like every other technological advance could practically be moot if you could do all of it in the virtual world. Why would you even need to advance those things in the real world?

Mitchell: Sooo …

Luckey: [laughs]

Mitchell: With that in mind, we have to figure out how we get there. But right now, we’re doing it on our own.

Luckey: And we think we can deliver a good consumer VR experience without having to partner with anyone. We’re open to partnering, but we don’t think we have to. We’re not banking on it.

And how does being based in southern California compare to being closer to a more conventional tech hub like Silicon Valley?

Mitchell: Recruiting is a little harder for us. But overall, we’ve been able to attract incredible talent.

Luckey: And if you’re in Silicon Valley, it’s probably one of the easiest places to start a company in terms of hiring people. But VR is such a tiny field, it’s not like all of a sudden we’re going to go to Silicon Valley and there’s, like, thousands of VR experts. Now, if I’m a Web company or a mobile company –

Mitchell: — that’s where I’d want to be.

Luckey: But in this case, these people aren’t necessarily all up in Silicon Valley. We’ve hired a bunch of people from Texas and Virginia and all these other places. It’s a niche industry. We actually have the biggest concentration of people working in consumer VR right now. And a lot of the top talent we get, they don’t care where we are, as long as it’s not, like, Alaska. They just really want to work on virtual reality, and there’s no one else doing it like we are.

Original author: 
Andrew Cunningham


I log some face-on time with Glass at Google I/O.

Florence Ion

"When you're at a concert and the band takes the stage, nowadays 50,000 phones and tablets go into the air," said Google Senior Development Advocate Timothy Jordan in the first Google Glass session of this year's Google I/O. "Which isn't all that weird, except that people seem to be looking at the tablets more than they are the folks onstage or the experience that they're having. It's crazy because we love what technology gives us, but it's a bummer when it gets in the way, when it gets between us and our lives, and that's what Glass is addressing."

The upshot of this perspective is that Glass and its software is designed for quick use. You fire it up, do what you want to do, and get back to your business without the time spent diving into your pocket for your phone, unlocking it, and so on. Whether this process is more distracting than talking to someone with Glass strapped to his or her face is another conversation, but this is the problem that Google is attempting to solve.

Since Google I/O is a developer's conference, the Glass sessions didn't focus on the social implications of using Glass or the privacy questions that some have raised. Rather, the focus was on how to make applications for this new type of device, something that is designed to give you what you want at a moment's notice and then get out of the way. Here's a quick look at what that ethos does to the platform's applications.


Original author: 
Ina Fried

Although Google is offering a limited set of developer tools for Glass — and more are on the way — the company doesn’t want to stop hackers from tinkering even further.


Indeed, during a developer conference session on Thursday, Google showed a variety of ways to gain deeper access to Glass. Some, such as running basic Android apps and even connecting a Bluetooth keyboard, can be done without rooting the device.

Google showed other hacks, such as running a version of Ubuntu Linux. Those actions, though, require deeper “root” access to the device. Google showed how developers can get such access, but cautions that doing so voids the warranty and could be irreversible.

That said, Google plans to make its factory image available so in most cases rooted Glass devices should be able to be returned to their original settings.

The session ended with a video showing a pair of the pricey specs being blended to a powdery mess, to heartfelt groans from the packed audience, many of whom forked over $1,500 to be among the first to buy the developer edition of Glass.

Showing a different level of interest in Glass, several members of Congress sent a letter to Google CEO Larry Page on Thursday asking questions about privacy issues raised by the high-tech specs.

Update: At a follow-up Fireside Chat session with developers, Google reiterated that a software development kit for Glass is coming, but Google’s Charles Mendis said not to expect it soon.

Isabelle Olsson, the lead designer for Glass, showed off one of the bulky early prototype designs for Glass as well as a current prototype that combines Glass with prescription glasses.

Prescription Google Glass prototype


Olsson, who quips that she has been working on Glass since it was a phone attached to a scuba mask, said that the development of Glass was “so ambitious and very messy.”

Getting the device light enough has been a key, Olsson said.

“If it is not light you are not going to want to wear it for more than 10 minutes,” Olsson said. “We care about every gram.”

Asked what kind of apps the Glass team would like to see, Olsson said she wanted a karaoke app, while Mendis said he would like to see some fitness apps.

Google Glass product director Steve Lee said Glass is designed around brief glances or “micro-interactions,” rather than watching a movie or reading an entire book.

“That would be painful,” Lee said. “We don’t want to create zombies staring into the screen for long periods of time.”


Original author: 
Casey Johnston


A demo of how to use the mirror API and its output during Timothy Jordan's talk.

If you’re looking for a taste of what it will be like to develop for Google Glass, the company posted a video demonstrating the hardware and a little bit of the API on Thursday. Timothy Jordan, a senior developer advocate at Google, gave a talk at SXSW in early March that lasted just shy of an hour and gave a look into the platform.

Google Glass bears more similarity to the Web than to the Android mobile operating system, so developing for it is simpler than creating an Android application. During the talk, Jordan goes over some of the functionality developers can get out of the Mirror API, which allows apps to pop Timeline Cards into a user’s view, as well as show new items from services the user might be subscribed to (weather, wire services, and so forth).

Jordan also shows how users can interact with items that crop up using the API. When the user sees something they like, for instance, they can re-share it with a button or “love” it.
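To make the interaction model concrete, here is a rough, hypothetical sketch of how a service might build the JSON body for a new timeline card. The endpoint constant and field names are assumptions for illustration based on the talk's description, not verified against the Mirror API documentation, and the card text and menu actions are placeholders:

```python
import json

# Assumed Mirror API timeline endpoint (for illustration only).
TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_card(text, menu_actions=None):
    """Return the JSON payload for a new timeline card.

    menu_actions lets the wearer act on the card once it appears,
    e.g. re-share it or "love" it, as described in the talk.
    """
    card = {"text": text}
    if menu_actions:
        card["menuItems"] = [{"action": action} for action in menu_actions]
    return json.dumps(card)

payload = build_timeline_card("Breaking: storm warning", ["SHARE", "DELETE"])
```

An actual service would send this payload in an authenticated HTTP POST; a weather or wire service the user subscribes to would simply insert fresh cards the same way.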



Augmented reality for mobile devices has grown in popularity in recent years partly because of the proliferation of smart phones and tablet computers equipped with exceptional cameras and partly because of developments in computer vision algorithms that make implementing such technologies on embedded systems possible.

Such augmented reality applications have always been limited to a single user receiving additional information about a physical entity or interacting with a virtual agent. Researchers at MIT’s Media Lab have taken augmented reality to the next level by developing a multi-user collaboration tool that allows users to augment reality and share those augmentations with other users, essentially turning the real world into a digital canvas for all to share.

The Second Surface project, as it is known, is described as:

…a novel multi-user Augmented reality system that fosters a real-time interaction for user-generated contents on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. The system allows users to place three dimensional drawings, texts, and photos relative to such objects and share this expression with any other person who uses the same software at the same spot.
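At its core, the interaction described above is a shared, location-keyed store of user-generated content: anyone using the software at the same spot sees the same annotations. The following is an illustrative sketch of that idea, not the actual Second Surface implementation; the class and field names are invented for clarity:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    author: str
    kind: str        # "drawing", "text", or "photo"
    position: tuple  # (x, y, z) offset relative to the anchor object

class SharedCanvas:
    """A toy model of a multi-user AR canvas keyed by physical spot."""

    def __init__(self):
        self._spots = {}  # location key -> list of annotations

    def place(self, spot_key, annotation):
        # Attach content to an everyday object (a tree, a house, ...).
        self._spots.setdefault(spot_key, []).append(annotation)

    def view(self, spot_key):
        # Any user at the same spot retrieves the same shared content.
        return list(self._spots.get(spot_key, []))
```

The real system adds the hard parts this sketch omits: recognizing the physical anchor with computer vision and synchronizing the store across devices in real time.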

If you still have difficulty understanding how this works, or why I believe it will be a game-changing technology for augmented reality and mobile devices once it is made available to the general masses, check out the following explanatory video.

Now, imagine combining this technology with Google Glass and free-form gesture recognition. How awesome would that be?

[source]


Google today invited the people who signed up for the $1,500 developer edition of its Project Glass wearable computing device to a set of developer events in San Francisco and New York City.

The company didn’t give a terrific amount of notice; the events are at the end of January and beginning of February, respectively. But developers will “have a device to use while on-site,” which is the real attraction.

The company said each Glass Foundry event will include “two days of full-on hacking” for people who have already signed up for the Glass Explorer Edition.

There’s still no release date for that device, nor a later product for the general public.

A Google spokesperson said of the event, “We’re looking forward to what developers will do with Glass, but we don’t have more details to share at this time.”

The Glass “Mirror API” is supposed to be a familiar environment for developers of RESTful Web services. Here’s a video explaining a bit about that:

And here’s the email:

Join us for an early look at Glass and two full days of hacking on the upcoming Google Mirror API in San Francisco or New York. These hackathons are just for developers in the Explorer program and we’re calling them the Glass Foundry. It’s the first opportunity for a group of developers to get together and develop for Glass.

We’ll begin the first day with an introduction to Glass. You’ll have a device to use while on-site. Next we’ll take a look at the Mirror API, which gives you the ability to exchange data and interact with the user over REST. We’ll then dive into development with Google engineers on site to help you at any point. At the end of the second day we’ll have a lively round of demos with some special guest judges.

If you’d like to attend this first Glass Foundry, please choose and register by Friday, January 18th at 4pm PT. There is limited space. If you are accepted, you will receive a confirmation letter with additional details and required terms after registration closes. Please don’t make any travel arrangements until your attendance is confirmed.

Glass Foundry San Francisco
January 28th & 29th at Google SF

Glass Foundry New York
February 1st & 2nd at Google NYC


At Google's Zeitgeist conference, its chairman, Eric E. Schmidt, described a long-term future in which life is managed by robots - and one a little bit closer to reality, in which billions more people can get access to information with new devices and connectivity.



Google made quite a splash with its Project Glass video earlier this month. While Google’s vision of wearable computing still looks a bit like science fiction today, a new report by Forrester analyst Sarah Rotman Epps argues that “in three years, wearables will matter to every product strategist” and that smart developers should start experimenting with applications for wearables on the “big five” platforms (Apple, Google, Microsoft, Amazon and Facebook) today.

In Rotman Epps’ vision of wearable computing in the near future, one of these major platforms will have to back the concept for it to go mainstream.

Specifically, she notes that Apple, with its “polished marketing, channel, and brand,” could use its vast developer ecosystem to incubate many of these projects by giving even its more low-end products (like the iPod nano) support for more sensors, WiFi and Bluetooth.

Google, says Rotman Epps, could become a major player due to the open nature of its Android platform. Android, after all, is already being used by basic wearable devices like the Sony SmartWatch and the WIMM One. She also warns, though, that Google’s “diffuse attention and lack of channel” will make it hard for the company to actually turn those ideas into products.

Microsoft, with its operating systems optimized for mobile and its Kinect sensor, as well as Amazon with its vast product catalog and Facebook with its rich social data could also play a major role in making wearable computing mainstream.

Indeed, Forrester’s analysts think wearables will follow a similar path to that of the smartphone market: In the first phase, Apple will create an early app and accessory ecosystem for wearable computing. Google’s open platform, however, will give developers more freedom and broader wearable experimentation. Microsoft, thanks to its recent shift toward open web standards, will then be able to offer something akin to an “anti-platform” platform for a future operating system for wearables that could be even more flexible than Apple’s and Google’s offerings.

In Forrester’s view, then, smart developers and product strategists should start to cultivate partnerships with apparel companies like Nike and Adidas now and those companies should also start to reach out to the developer community and the big five platforms.
