Holography

Original author: Megan Gibson

Many artists perceive power in movement. Photographer and visual artist Chris Levine seeks to illuminate the power inherent in stillness.

His larger-than-life subjects — which include Queen Elizabeth II and singer Grace Jones — might be among the most photographed people in the world, but Levine has a knack for capturing them at rest, as if in the calm of a storm. “Every opportunity I got [to shoot a portrait], I tried to distill it back to just pure essence without any suggestion or iconography or anything,” he told TIME during a recent visit to his studio in Oxfordshire, England, ahead of his solo retrospective show at The Fine Art Society on May 17. “I’m experimenting with that and trying to get stillness in the image.”

He says the challenge as a photographer is to distance himself from the idea of his subject and focus on the person right in front of his lens. In a recent sitting with Kate Moss, Levine says he was determined to ignore Kate Moss, the supermodel, and instead tried “to bring her back, just to Kate – Kate, Kate, Kate.” In doing this, he manages to take one of the fashion world’s most recognizable faces and show it in a new light.

Which may explain why an artist who largely focuses on lights, lasers and holography — as Levine has done since his student days at London’s Chelsea School of Art; his light installations will be included in the retrospective at The Fine Art Society — has made a name for himself in recent years for his portraits. The Canadian-born Brit, now 41, says that he never expected to be shooting icons at this stage in his career. In fact, back in 2004, when he received a call from Buckingham Palace asking him to shoot a portrait of the Queen, Levine initially thought it was a prank. “I thought it was a hoax at first! Seriously, I really did. It just seemed so far-fetched.”

Once Levine was sufficiently convinced that it was not a ruse but a Royal request, he went to work preparing lights and equipment, wanting to put his knowledge of light and holography to use capturing the monarch in a truly modern fashion. Setting up the visual light equipment in Buckingham Palace took Levine about three days – “and it took every second,” he recalls – and the shoot itself took about an hour and a half. However, the resulting images, including Lightness of Being as well as the shot selected for TIME’s cover on the Queen’s Diamond Jubilee in 2012, are arresting and timeless.

“I think [these images] struck such a chord because it’s going somewhere into a more spiritual dimension and into a deeper realm,” he says. “It’s what we are but people don’t very often connect with it.”

Chris Levine: Light 3.142 is on display from May 17 to June 15, 2013 at The Fine Art Society in London.

Chris Levine is a Canadian-born light artist based in the United Kingdom.

Megan Gibson is a writer and reporter at the London bureau of TIME. Find her on Twitter at @MeganJGibson.


Whether you love or hate TRON: Legacy, few would deny it’s a beautiful movie. Every penny of the Disney sequel’s reported $170 million budget is undeniably on screen in the form of all manner of special effects.

Bradley Munkowitz, aka GMUNK, was the lead animated graphics artist on the film and led a team that created about 10 minutes of those special effects. But not the big action set pieces one usually thinks of when talking about special effects: he created the user interfaces and holograms found in various scenes. In the world of Tron, those visuals are some of the most important things on the Grid. GMUNK has uploaded six videos highlighting his work from the film, in context. They play like ethereal, effects-heavy movie/music videos. Check them out below.

All the videos below are VFX breakdowns: edits of the final film showing either the UI or the holograms the animated graphics team created for that particular scene.

TRON GFX Disc Game from GMUNK on Vimeo.

TRON GFX Throne Room from GMUNK on Vimeo.

TRON GFX Rectifier Globe from GMUNK on Vimeo.

TRON GFX Fireworks from GMUNK on Vimeo.

TRON GFX Portal Climax from GMUNK on Vimeo.

TRON GFX Solar Sailor from GMUNK on Vimeo.

Which video do you like best?


MrSeb writes "Electrical engineers and material scientists at MIT have created a fiber-borne laser that could be woven to form a flexible display that could project different 3D images in any number of directions, to any number of viewers. MIT's fiber is similar to standard telecoms fiber, but it has a tiny droplet of fluid embedded in the core. When laser light hits the fluid, it scatters, effectively creating a 360-degree laser beam. The core is then surrounded by layers of liquid crystal, which can be controlled like 'pixels,' allowing the laser light to escape from specific points anywhere along the length of the fiber. This means that you could have a display that shows one picture on the 'front' and another on the 'back' — or different, glasses-free 3D images for everyone sitting in front and behind. In the short term, the laser fiber is more likely to have a significant application in photodynamic therapy, an area of medicine where drugs are activated using light. Photodynamic therapy is one of the only ways to treat cancer in a relatively non-invasive and non-toxic manner. MIT's laser could be threaded into almost any part of the body, where the ability to produce pixels of laser light at any point along its length would make it a highly accurate device."
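The pixel-addressing idea described above can be sketched in a toy model. This is purely illustrative (the class, cell count, and "front"/"back" sides are assumptions, not from MIT's work): each liquid-crystal cell along the fiber acts as a switchable window, so the 360-degree scattered laser light escapes only where a given side's cell is open, letting the front and back of the fiber show different images.

```python
# Toy model of the fiber-display concept: liquid-crystal cells along the
# fiber gate where the scattered laser light can escape, per viewing side.

class FiberDisplay:
    def __init__(self, n_cells):
        # Each cell holds an on/off state for each viewing side.
        self.cells = [{"front": False, "back": False} for _ in range(n_cells)]

    def set_pixel(self, i, side, on):
        self.cells[i][side] = on

    def visible_from(self, side):
        # A viewer on `side` sees light wherever that side's cell is open.
        return [i for i, c in enumerate(self.cells) if c[side]]

fiber = FiberDisplay(8)
for i in (0, 2, 4):
    fiber.set_pixel(i, "front", True)   # one pattern on the front
for i in (1, 3, 5):
    fiber.set_pixel(i, "back", True)    # a different pattern on the back

print(fiber.visible_from("front"))  # [0, 2, 4]
print(fiber.visible_from("back"))   # [1, 3, 5]
```

The point of the sketch is only that a single light source plus per-cell gating yields independent images in different directions; the real device does this optically, not digitally.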



Read more of this story at Slashdot.


How do you show the player in a space-shooter game the real size of the objects around them? How can you make them "feel" that a tunnel is 20 meters wide, and not 2 meters or even 200 meters?


The holodeck: a room that can create an interactive 3-D hologram of just about any environment you can think of. It’s been the dream of Star Trek nerds ever since The Next Generation debuted on TV. Well, brace yourself, Trekkies, and try not to soil those Starfleet unitards in glee. The U.S. intelligence community has heard your prayers and is now taking a step toward building its own holographic simulator.

Iarpa, the intelligence community’s advanced research outfit, announced this month that it’s embarking on a Synthetic Holographic Observation (SHO) program, a quest to build a system that lets intel analysts collaborate with each other using interactive 3-D holographic displays.

Before you get too excited, SHO isn’t going to be an exact replica of the holodeck. Instead of a geometrically patterned room on board the Enterprise, the holograms will come from workstations here on earth. While Iarpa’s announcement promises “dynamic, color, high-performance” holograms, the all-around holographic environment that’s indistinguishable from reality is still a long ways off. In the meantime, Iarpa’s program will rely on synthetic, electronically reproduced light fields.

SHO is bringing part of the holodeck concept one small step closer to reality, though. The program is aimed at generating 3-D displays that let analysts get a better feel for the mountains of imagery that the intelligence community collects. In particular, SHO needs to render conventional imagery and LIDAR (light detection and ranging) into holographic light fields. LIDAR bounces beams of light off objects in a manner not too different from conventional radar, allowing users to quickly make 3-D images and maps.
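The LIDAR principle mentioned above reduces to simple time-of-flight geometry. Here is a hedged sketch of that generic math (not Iarpa's actual pipeline; the function name and sample numbers are illustrative): a pulse's round-trip time gives range, and combined with the beam's pointing angles it yields one 3-D point, millions of which form the maps an analyst would view.

```python
# Generic LIDAR time-of-flight geometry: round-trip time -> range,
# range + beam angles -> a 3-D point in the sensor's frame.
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_point(round_trip_s, azimuth_deg, elevation_deg):
    r = C * round_trip_s / 2.0          # one-way distance to the target
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A pulse returning after ~667 nanoseconds puts the target ~100 m away.
x, y, z = lidar_point(667e-9, azimuth_deg=30.0, elevation_deg=10.0)
d = (x * x + y * y + z * z) ** 0.5
print(round(d, 2))  # prints 99.98
```

Sweeping the beam over azimuth and elevation while timing returns is what lets a LIDAR sensor build a 3-D image quickly, which is why the technique is a natural fit for the holographic rendering SHO calls for.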

Just generating a hologram from aerial imagery isn’t enough, though. SHO needs to be able to let multiple analysts work together on the same image at the same time. To do that, it has to be interactive. Iarpa’s asking prospective builders to make a hologram that analysts can navigate and manipulate in ways that regular maps don’t allow.

This isn’t the defense world’s first foray into the world of holograms. Some projects, like the “Face of Allah,” have aimed at beaming a 3-D image of a deity over the battlefield in hopes of striking fear of the divine into the hearts of the enemy. Darpa’s contract with Vuzix of Rochester, New York, is a little closer to SHO’s goals. Vuzix is building eyewear that would give troops on the ground a holographic image of nearby air assets and allow them to call in airstrikes with greater precision.

Unlike the battlefield hologlasses, though, eyewear-based devices need not apply for SHO. Iarpa wants its holographic displays to be visible to the naked eye.

Analysts’ eyeballs are a special concern for Iarpa, too. Conventional 3-D technology can cause eye strain when used for long periods, and Iarpa needs analysts to use SHO for long stretches, so anyone pitching a holographic system will have to make one that stays easy on the eyes over extended use.

Iarpa’s announcement provides a few examples of how it would like to use the holographic system. During the testing phase, it wants to see how holographic systems work on LIDAR data of urban environments and terrain, and on conventional imagery of buildings and airspace.

But for a more real-life example of how holographic displays could be useful, take the bin Laden raid as a test case. In that instance, the intel community’s imagery nerds used satellites and airborne sensors to snap all kinds of imagery of the terror leader’s Abbottabad crib. That imagery helped Navy SEALs build a real-life mockup of Chez bin Laden at Bagram Air Field. And it may have led the National Geospatial-Intelligence Agency — the intel community’s imagery exploitation shop — to make virtual models of the compound with its software.

If SHO can move past the prototype phase, imagery analysts would be able to quickly generate immersive models of a high-value target’s lair. Multiple analysts and personnel could take a virtual stroll through the building and help plot a raid without ever having to visit the real-world replica.

See Also:
