Though virtual machines have become indispensable in the server room over the last few years, desktop virtualization has been less successful. One of the reasons has been performance, and specifically graphics performance—modern virtualization products are generally pretty good at dynamically allocating CPU power, RAM, and drive space as clients need them, but graphics performance just hasn't been as good as it is on an actual desktop.

NVIDIA wants to solve this problem with its VGX virtualization platform, which it unveiled at its GPU Technology Conference in May. As pitched, the technology will allow virtual machines to use a graphics card installed in a server to accelerate applications, games, and video. Through NVIDIA's VGX Hypervisor, compatible virtualization software (primarily from Citrix, though Microsoft's RemoteFX is also partially supported) can use the GPU directly, allowing thin clients, tablets, and other devices to more closely replicate the experience of using actual desktop hardware.

Image: NVIDIA's VGX K1 is designed to bring basic graphics acceleration to a relatively large number of users. (Credit: NVIDIA)

When last we heard about the hardware that drives this technology, NVIDIA was talking up a board with four GPUs based on its Kepler architecture. That card, now known as the NVIDIA VGX K1, is built to provide basic 3D and video acceleration to a large number of users—up to 100, according to NVIDIA's marketing materials. Each of this card's four GPUs uses 192 of NVIDIA's graphics cores and 4GB of DDR3 RAM (for a total of 768 cores and 16GB of memory), and the card as a whole has a reasonably modest TDP of 150 watts—for reference, NVIDIA's high-end GTX 680 desktop graphics card has a TDP of 195W, and the dual-GPU version (the GTX 690) steps this up to 300W.


Thoroughly fascinating article in Smithsonian Magazine by Tony Perrottet on the overlooked biographical details of that legendary Casanova, Giacomo Casanova. The piece opens with a gob-smacking accounting of the serpentine path his celebrated memoir took, ending in its exalted cubby in the Bibliothèque nationale de France in Paris. Suffice it to say it includes a stop during the 19th century in a special cupboard for illicit books in the French National Library, called L’Enfer, or “the Hell.”

The story then turns to a vividly sketched outline of Casanova’s life – establishing a far, far more interesting character than, as Perrottet puts it, “a frivolous sexual adventurer, a cad and a wastrel.” In fact,

Giacomo Girolamo Casanova lived from 1725 to 1798, and was a far more intellectual figure than the gadabout playboy portrayed on film. He was a true Enlightenment polymath, whose many achievements would put the likes of Hugh Hefner to shame. He hobnobbed with Voltaire, Catherine the Great, Benjamin Franklin and probably Mozart; survived as a gambler, an astrologer and spy; translated The Iliad into his Venetian dialect; and wrote a science fiction novel, a proto-feminist pamphlet and a range of mathematical treatises. He was also one of history’s great travelers, crisscrossing Europe from Madrid to Moscow. And yet he wrote his legendary memoir, the innocuously named Story of My Life, in his penniless old age, while working as a librarian (of all things!) at the obscure Castle Dux, in the mountains of Bohemia in the modern-day Czech Republic.

In British terms, let’s say, this is all much more Richard Francis Burton than Flashman. Fascinating, and as Blackadder would say, “as French as a pair of self-removing trousers.”

As far as the art goes, above are some frisky watercolors by Auguste Leroux from the 1932 French edition of Casanova's Histoire de ma Vie. Leroux was a celebrated illustrator who worked with Huysmans, Balzac, Stendhal and Flaubert… below are some fetching prints by Milo Manara inspired by the 1976 Fellini film. (My appreciation of their finest collaboration, A Trip to Tullum, here.)

Also, for your pleasure, a live cut of Roxy Music’s strutting tribute.

Roxy Music: Casanova [download]


The obvious challenge in reviewing installation art is the inevitable "you had to be there" issue, relying as it does so much on real-time manipulation (in a non-pejorative sense). This is especially true of Ryoji Ikeda's new data.anatomy (civic) piece, which opened yesterday in Berlin, combining as it does a massively theatrical setting with a complex piece of video art.


I suppose What You See Is What You Get has its place, but as an OCD-addled programmer, I have a problem with WYSIWYG as a one-size-fits-all solution. Whether it's invisible white space or invisible formatting tags, it's been my experience that forcing people to work with invisible things they cannot directly control … inevitably backfires. A lot.

I have a distinctly Ghostbusters attitude to this problem.

Image: Ghostbusters logo

I need to see these invisible things, so that I can zap them with my proton pack. I mean, er, control them. And more importantly, understand them; perhaps even master them.

I recently had the great privilege of meeting Ted Nelson, who gave me an in-person demo of his ZigZag project and his Xanadu project, perpetually in progress since 1960 and currently known as Xanadu Space. But one thing he mentioned as he gave the demo particularly intrigued me. Being Ted Nelson, of course he went much further than my natural aversion to invisible, hidden markup in content – he insisted that markup and content should never be in the same document. Far more radical. In his own words:

I want to discuss what I consider one of the worst mistakes of the current software world, embedded markup; which is, regrettably, the heart of such current standards as SGML and HTML. (There are many other embedded markup systems; an interesting one is RTF. But I will concentrate on the SGML-HTML theology because of its claims and fervor.)

There is no one reason this approach is wrong; I believe it is wrong in almost every respect.

I propose a three-layer model:

  1. A content layer to facilitate editing, content linking, and transclusion management.
  2. A structure layer, declarable separately. Users should be able to specify entities, connections and co-presence logic, defined independently of appearance or size or contents; as well as overlay correspondence, links, transclusions, and "hoses" for movable content.
  3. A special-effects-and-primping layer should allow the declaration of ever-so-many fonts, format blocks, fanfares, and whizbangs, and their assignment to what's in the content and structure layers.
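
To make that a bit more concrete, here's a hypothetical sketch of what those three layers could look like as plain data structures. This is my own illustration rather than anything Nelson specified; every interface name in it is invented.

```typescript
// Hypothetical sketch of the three-layer split. Nothing is interleaved:
// structure and styling point into the content layer by id and offset.

// Layer 1: raw content, addressable so other layers can reference it.
interface ContentItem {
  id: string;    // stable address for linking and transclusion
  text: string;  // the content itself, with no markup embedded
}

// Layer 2: structure declared separately, referencing content by id/offset.
interface StructureLink {
  kind: "link" | "transclusion" | "overlay";
  from: { itemId: string; start: number; end: number };
  to:   { itemId: string; start: number; end: number };
}

// Layer 3: presentation ("special effects and primping"), also by reference.
interface StyleRule {
  target: { itemId: string; start: number; end: number };
  font?: string;
  block?: "heading" | "paragraph" | "pullquote";
}

// A document is just the three layers side by side.
interface ThreeLayerDocument {
  content: ContentItem[];
  structure: StructureLink[];
  styling: StyleRule[];
}
```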

It's an interesting, albeit extremely hand-wavy and complex, alternative. I'm unclear how you would keep the structure layer in sync with the content layer if someone is editing the content. I don't even know if there are any real world examples of this three layer approach in action. (And as usual, feel free to correct me in the comments if I've missed anything!)

Instead, what we do have are existing, traditional methods of intermixing content and markup à la HTML or TeX.

Image: PDF vs. TeX

When editing, there are two possible interfaces:

  1. WYSIWYG where the markup layer is magically hidden so, at least in theory, the user doesn't ever have to know about markup and can focus entirely on the content. It is an illusion, but it is simple enough when it's working. The downside is that the abstraction – this idea that the markup is truly "invisible" – is rarely achieved in practice and often breaks down for anything except the most basic of documents. But it can be good enough in a lot of circumstances.
  2. Two windows where the markup is fully visible in one window, and shown as a live rendered preview in the other window, updated as you type, either side-by-side or top-and-bottom. Users have a dynamic sandbox where they can experiment and learn how markup and content interact in the real world, rather than having it all swept under the rug. Ultimately, this results in less confusion for intermediate and advanced users. That's why I'm particularly fond of this approach, and it is what we use on Stack Exchange. The downside is that it's a bit more complex, depending on whether or not you use humane markup, and it certainly takes a bit more screen space and thinking to process what's going on.
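
Here's a minimal sketch of that second, two-window approach: a textarea holding the markup, and a preview pane re-rendered on every keystroke. The toy renderToHtml function and the #editor / #preview element ids below are assumptions for illustration only; this is not what Stack Exchange actually runs.

```typescript
// Toy renderer: just enough "markup" to show the idea. A real editor would
// plug in a proper Markdown (or other humane markup) engine here.
function renderToHtml(markup: string): string {
  return markup
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")                              // escape raw HTML first
    .replace(/^## (.*)$/gm, "<h2>$1</h2>")              // "## " headings
    .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>")   // **bold**
    .replace(/\*(.+?)\*/g, "<em>$1</em>")               // *italics*
    .replace(/\n{2,}/g, "</p><p>");                     // blank line = new paragraph
}

// Hypothetical element ids for the two panes.
const editor = document.querySelector<HTMLTextAreaElement>("#editor")!;
const preview = document.querySelector<HTMLDivElement>("#preview")!;

// Re-render the preview on every keystroke, so the markup and its result
// stay visible side by side instead of being swept under the rug.
editor.addEventListener("input", () => {
  preview.innerHTML = "<p>" + renderToHtml(editor.value) + "</p>";
});
```

Swap the toy renderer for a real markup engine and the shape of the loop stays the same.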

What I didn't realize is that there's actually a third editing option: keep the markup visible, and switch rapidly back and forth between the markup and rendered view with a single keystroke. That's what the Gliimpse project reveals:

Please watch the video. The nearly instantaneous and smooth transition that Gliimpse demonstrates between markup and preview has to be seen to be appreciated. The effect is a bit like Exposé on the Mac, or Switcher on the PC. I'm not sure how I feel about this, mainly because I don't know of any existing IDEs that even attempt to do anything remotely like it.
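
Stripped of the animation, which is the whole point of Gliimpse, the bare mechanic is simple enough to sketch: flip between a markup pane and a rendered pane on a single keystroke. The element ids and the Ctrl+M binding below are arbitrary choices for illustration, not anything taken from the Gliimpse project.

```typescript
// Hypothetical panes: #markup holds the source, #rendered holds the preview.
const markupPane = document.querySelector<HTMLTextAreaElement>("#markup")!;
const renderedPane = document.querySelector<HTMLDivElement>("#rendered")!;
let showingRendered = false;

document.addEventListener("keydown", (event) => {
  if (event.ctrlKey && event.key === "m") {  // Ctrl+M: arbitrary toggle key
    event.preventDefault();
    showingRendered = !showingRendered;
    renderedPane.hidden = !showingRendered;
    markupPane.hidden = showingRendered;
    if (showingRendered) {
      // Any renderer could be dropped in here; plain text keeps the sketch short.
      renderedPane.textContent = markupPane.value;
    } else {
      markupPane.focus();
    }
  }
});
```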

But I'd sure like to try it. As a software developer, it's incredibly frustrating to me that we have generational improvements in games like Skyrim and Battlefield 3 that render vastly detailed, dynamic worlds at 60 frames per second, yet our source code editors are advancing only in tiny incremental steps, year after year.


What was Microsoft's original mission?

In 1975, Gates and Allen form a partnership called Microsoft. Like most startups, Microsoft begins small, but has a huge vision – a computer on every desktop and in every home.

The existential crisis facing Microsoft is that they achieved their mission years ago, at least as far as the developed world is concerned. When was the last time you saw a desktop or a home without a computer? 2001? 2005? We're long since past the point where Microsoft's original BHAG was met, and even exceeded. PCs are absolutely ubiquitous. When you wake up one day to discover that you've completely conquered the world … what comes next?

Apparently, the Post PC era.

Microsoft never seemed to recover from the shock of achieving their original 1975 goal. Or perhaps they thought that they hadn't quite achieved it, that there would always be some new frontier for PCs to conquer. But Steve Jobs certainly saw the Post PC era looming as far back as 1996:

The desktop computer industry is dead. Innovation has virtually ceased. Microsoft dominates with very little innovation. That's over. Apple lost. The desktop market has entered the dark ages, and it's going to be in the dark ages for the next 10 years, or certainly for the rest of this decade.

If I were running Apple, I would milk the Macintosh for all it's worth – and get busy on the next great thing. The PC wars are over. Done. Microsoft won a long time ago.

What's more, Jobs did something about it. Apple is arguably the biggest (and in terms of financials, now literally the biggest) enemy of general purpose computing with the iPhone and iPad. These days, their own general purpose Mac operating system, OS X, largely plays second fiddle to the iOS juggernaut powering the iPhone and iPad.

Here's why:

Image: Apple cumulative sales graph

The slope of this graph is the whole story. The complicated general purpose computers are at the bottom, and the simpler specialized computers are at the top.

I'm incredibly conflicted, because as much as I love the do-anything computer …

  • I'm not sure that many people in the world truly need a general purpose computer that can do anything and install any kind of software. Simply meeting the core needs of browsing the web and email and maybe a few other basic things covers a lot of people.
  • I believe the kitchen-sink-itis baked into the general purpose computing foundations of PCs, Macs, and Unix makes them fundamentally incompatible with our brave new Post PC world. Updates. Toolbars. Service Packs. Settings. Anti-virus. Filesystems. Control panels. All the stuff you hate when your Mom calls you for tech support? It's deeply embedded in the culture and design of every single general purpose computer. Doing potentially "anything" comes at a steep cost in complexity.
  • Very, very small PCs – the kind you could fit in your pocket – are starting to have the same amount of computing grunt as a high end desktop PC of, say, 5 years ago. And that was plenty, even back then, for a relatively inefficient general purpose operating system.

But the primary wake up call, at least for me, is that the new iPad finally delivered an innovation that general purpose computing has been waiting on for thirty years: a truly high resolution display at a reasonable size and price. In 2007 I asked where all the high resolution displays were. Turns out, they're only on phones and tablets.

Image: iPad 2 display vs. iPad 3 display

That's why I didn't just buy the iPad 3 (sorry, The New iPad). I bought two of them. And I reserve the right to buy more!

iPad 3 reviews that complain "all they did was improve the display" are clueless bordering on stupidity. Tablets are pretty much by definition all display; nothing is more fundamental to the tablet experience than the quality of the display. These are the first iPads I've ever owned (and I'd argue, the first worth owning), and the display is as sublime as I always hoped it would be. The resolution and clarity are astounding, a joy to read on, and give me hope that one day we could potentially achieve near print resolution in computing. The new iPad screen is everything I've always wanted on my desktops and laptops for the last 5 years, but I could never get.

Don't take my word for it. Consider what screen reading pioneer, and one of the creators of ClearType, Bill Hill has to say about it:

The 3rd Generation iPad has a display resolution of 264ppi. And still retains a ten-hour battery life (9 hours with wireless on). Make no mistake. That much resolution is stunning. To see it on a mainstream device like the iPad - rather than a $13,000 exotic monitor - is truly amazing, and something I've been waiting more than a decade to see.

It will set a bar for future resolution that every other manufacturer of devices and PCs will have to jump.

And the display calibration experts at DisplayMate have the measurements and metrics to back these claims up, too:

… the new iPad’s picture quality, color accuracy, and gray scale are not only much better than any other Tablet or Smartphone, it’s also much better than most HDTVs, laptops, and monitors. In fact with some minor calibration tweaks the new iPad would qualify as a studio reference monitor.

Granted, this is happening on tiny 4" and 10" screens first due to sheer economics. It will take time for it to trickle up. I shudder to think what a 24 or 27 inch display using the same technology as the current iPad would cost right now. But until the iPhone and iPad, near as I can tell, nobody else was even trying to improve resolution on computer displays – even though all the existing HCI research tells us that higher resolution displays are a deep fundamental improvement in computing.

At the point where these simple, fixed function Post-PC era computing devices are not just "enough" computer for most folks, but also fundamentally innovating in computing as a whole … well, all I can say is bring on the post-PC era.


Image caption: Would this have been better with a giant '2' crudely imposed in place of the 's'?
At this rate, everybody will soon see their favourite developer starting some kicks or kicking some starters. Brian Fargo, creator of Wasteland, the game that launched a thousand Fallouts, has espied the queue of well-regarded figures approaching their adoring audience cap in hand and is now seeking a cap of his own. It’ll be a comically large bit of headwear as he wants to cram at least a million dollars into it, which is the estimated cost of funding a Wasteland sequel. The game would be a PC release, with, according to the man’s own Twittertalk, a “complete old school vibe and made with input from gamers. Made the gamers way.” The gamers way often involves eating Wotsits until dawn but perhaps there are other ways and other gamers?


Electronic music making has had several major epochs. There was the rise of the hardware synth, first with modular patch cords and later streamlined into encapsulated controls, in the form of knobs and switches. There was the digital synth, in code and graphical patches. And there was the two-dimensional user interface.

We may be on the cusp of a new age: the three-dimensional paradigm for music making.

AudioGL, a spectacularly ambitious project by Toronto-based engineer and musician Jonathan Heppner, is one step closer to reality. Three years in the making, the tool is already surprisingly mature. And a crowd-sourced funding campaign promises to bring beta releases as soon as this summer. In the demo video above, you can see an overview of some of its broad capabilities:

  • Synthesis, via modular connections
  • Sample loading
  • The ability to zoom into more conventional 2D sequences, piano roll views, and envelopes/automation
  • Grouping of related nodes
  • Patch sharing
  • Graphical feedback for envelopes and automation, tracked across z-axis wireframes, like circuitry

All of this is presented in a mind-boggling visual display, resembling nothing more than constellations of stars.

Is it just me, or does this make anyone else want to somehow combine modular synthesis with a space strategy sim like Galactic Civilizations? Then again, that might cause some sort of nerd singularity that would tear apart the fabric of the space-time continuum – or at least ensure we never have any normal human relationships again.

Anyway, the vitals:

  • It runs on a lowly Lenovo tablet right now, with integrated graphics.
  • The goal is to make it run on your PC by the end of the year. (Mac users hardly need a better reason to dual boot. Why are you booting into Windows? Because I run a single application that makes it the future.)
  • MIDI and ReWire are onboard, with OSC and VST coming.
  • With crowd funding, you’ll get a Win32/64 release planned by the end of the year, and betas by summer (Windows) or fall/winter (Mac).

I like this quote:

Some things which have influenced the design of AudioGL:
Catia – Dassault Systèmes
AutoCAD – Autodesk
Cubase – Steinberg
Nord Modular – Clavia
The Demoscene

Indeed. And with computer software now reaching a high degree of maturity, such mash-ups could open new worlds.

Learn about the project, and contribute by the 23rd of March via the (excellent) IndieGogo:

http://audiogl.com
