The story of Zite has been a whirlwind. We launched on March 9, 2011, and closed our acquisition by CNN on Aug. 30 of the same year — just under six months later.
But acquisition was not our original plan. We never built the company with the intention of getting acquired. When we launched Zite, we were thrilled to get such a great reception from the press and hundreds of thousands of new users. Our goal was to use that influx of users to secure series A funding to build a team and compete effectively in a crowded market. But fate intervened, and we got an attractive acquisition offer from CNN, a company that believed in our vision. In hindsight, I can see that there were a few really smart things that we did that made us an excellent acquisition target.
My goal for this article isn’t to give you a silver bullet for getting your company acquired, but rather to offer some insight into what I think are the key reasons that Zite was able to move from launch to transaction in such a compressed timeline.
- Have a huge product launch — It doesn’t matter how good your product is if people don’t know about it. Once we believed we had the right product, we marketed it very hard. We spent much more money on PR surrounding our launch than was fiscally prudent at the time (we were risking future payroll) because we realized that we had one chance to tell the world that Zite was awesome. This paid off in spades: On launch day, we had print articles in The Wall Street Journal and USA Today, plus dozens of other fantastic pieces. This yielded us top billing in the App Store for free applications, and 125,000 downloads in the first week.
- Put your best foot forward — We focused much of our product design on the first minutes of the user experience. We knew that if a user never saw our amazing personalization technology, we’d lose them, and they’d think we were just a “me-too” news reader. We put our technology front and center by designing a simple, intuitive set-up experience that yielded immediate delight and serendipity.
- Have technology that is incredibly difficult to replicate — You’re not going to get bought if the acquiring company thinks they can build the product themselves. Zite had the advantage of almost six years of R&D (we were formerly called Worio), but until we became Zite, we were a technology company with a product problem. Instead of continuing to use the technology on a failed product, we pivoted to Zite. We also seized the opportunity to launch on the iPad, which is the perfect delivery device for the technology.
- Have a clear vision — We had a vision to change the way people consume information. Zite (the product) and personalization are components of that vision, but we proved that we were not a one-trick pony, and we were excited about innovating on news delivery.
- Disrupt the market — CNN noticed Zite after we received a cease-and-desist from major media companies, including Time Inc. (which is a cousin of CNN, since both are owned by Time Warner). My boss jokes, “If all of the media companies were able to get their lawyers to send you a letter, then you must be doing something right.” At the time, we weren’t sure how we would work with publishers, and publishers weren’t sure of the value of Zite. We’re now on solid ground with publishers, since they have realized the value of Zite as a discovery engine — but at the time it was a great boost to our visibility among the exact same executives who would later give us an offer for the company.
I want to stress that none of the above points are a guarantee that your company will get acquired — let alone be successful — but they certainly influenced CNN’s decision to buy Zite and, ultimately, our success to date. Look for ways you can integrate these tips into your start-up, and even if you aren’t acquired quickly, you will certainly build a better long-term offering for whatever market you choose to address.
Mark Johnson is CEO of Zite. He was an adviser to the company for almost two years, prior to taking the CEO role. He brings a strong product and technology background, with experience at several successful search start-ups: Powerset (natural-language search, acquired by Microsoft), Kosmix (categorized search, acquired by Walmart), and SideStep (travel search, acquired by Kayak). Most recently, he led product at Bing in San Francisco.
OIL FIELDS #27
Bakersfield, California, USA, 2004
Photo: Edward Burtynsky
There’s no doubt that Edward Burtynsky’s photos from his Oil series are best viewed as enormous prints on a gallery wall. Known as one of the preeminent projects about the industrial age, the photos rely on scale to deliver their message about how oil has changed both the earth and humankind in profound and lasting ways.
That’s why we were skeptical when we heard he was releasing a new iPad version of the project’s book, which was originally published in 2009. How would these prints translate to a backlit viewing platform smaller than a sheet of office paper?
With app in hand, we were able to confirm the obvious — the iPad will never replace a print on the wall or a well-designed photo book. That said, what we lost in scale and tactility was made up for, at least in part, by the other features we’ve all come to love about the iPad.
A case in point is the short interviews with Burtynsky that accompany 24 of the photos. I enjoy a piece of art more when I know something about it, and hearing Burtynsky explain things that you wouldn’t find in a normal caption — like why he composed certain photos in very particular ways — enriched the experience.
Other features on the app include three videos of Burtynsky speaking about his work and maps that show the location of the photos. There are also nine new images from the Gulf oil spill.
What tips the scales in favor of the app is the price. The Oil book sells for $128 on the publisher’s website. We can just imagine how much a Burtynsky print sells for. So at $9.99 there’s not much room to complain. If you enjoy Burtynsky’s work, it’s a drop in the bucket to experience a project that will only get more important as time goes on.
John Vink's new iPad app, "Quest for Land," documenting the struggles of poor Cambodians facing land-grabs and illegal evictions, is unbound by the finite restrictions of a printed book, allowing thousands of images to tell a fuller story.
luanna is a beautiful new application out of Tokyo-based visual/sound collective Phontwerp_. Amidst a wave of audiovisual iPad toys, luanna is notable for its elegance, connecting swirling flurries of particles with gestures for manipulation. I imagine I’m not alone when I say I have various sample manipulation patches lying around, many in Pd, lacking visualization, and wonder what I might use in place of a knob or fader to manipulate them. In the case of luanna, these developers find one way of “touching” the sound.
luanna is an audio-visual application designed for the iPad that allows you to create and control music through the manipulation of moving images.
The luanna app has been designed to be visually simple and intuitive, whilst retaining a set of rich and comprehensive functions. Through hand gestures you can touch, tap and manipulate the image, as if you were touching the sound. The image changes dynamically with your hand movements, engaging you with the iPad’s environment.
The interface is multi-modal, with gestures activating different modes. This allows you to select samples, play in reverse, swap different playback options, mute, and add a rhythm track or crashing noises. It’s sort of half-instrument, half-generative.
Phontwerp_ themselves are an interesting shop, described as a “unit” that will “create tangible/intangible products all related to sound.” Cleverly named as chord symbols, ∆7, -7, add9, and +5 handle sound art, merch, music performance / composition / sound design, and code, respectively. That nexus of four dimensions sounds familiar for our age.
Sadly, this particular creation is one of a growing number of applications that skips over the first-generation iPad and its lower-powered processor and less-ample RAM. Given Apple can make some hefty apps run on that hardware, though, I hope that if independent developers find success supporting the later models, they back-port some of their apps.
See the tutorial for more (including a reminder that Apple’s multitasking gestures are a no-no).
US$16.99 on the App Store. (Interested to see the higher price, as price points have been low for this sort of app – but I wonder if going higher will eventually be a trend, given that some of the audiovisual stuff we love has a more limited audience!)
Readers request Audio Copy and sample import right away. I think sample import, at least, could easily justify a higher price, by making this a more flexible tool.
Find it on our own directory, CDM Apps:
Very similar in its approach is the wonderful Thicket, well worth considering:
See our recent, extensive profile of that application’s development:
Thicket for iOS Thickens; Artists Describe the Growth of an Audiovisual Playground
See also, in a similar vein, Julien Bayle’s recent release US$4.99 Digital Collisions:
How do you visualize the invisible? How do you expose a process with multiple parameters in a way that’s straightforward and musically intuitive? Can messing about with granular sound feel like touching that sound – something untouchable?
Music’s ephemeral, unseeable quality, and the ways we approach sound in computer music in similarly abstract ways, are part of the pleasure of making noise. But working out how to then design around that can be equally satisfying. That’s why it’s wonderful to see work like the upcoming Borderlands for iPad and desktop. It solves a problem familiar to computer users – designing an interface for a granular playback instrument – but does so in a way that’s uncommonly clear. And with free code and research sharing, it could help inspire other projects, too.
Its creator also reminds us, though, that the impetus for all of this can be the quest for beautiful sound.
Creator Chris Carlson is publishing source code and a presentation for the NIME [New Interfaces for Musical Expression] conference. But this isn’t just an academic problem or a fun design exercise: he also uses this tool in performance, so the design is informed by those needs. (I’m especially attuned to this particular problem, as I was recently mucking about with a Pd patch of mine that did similar things, working out how to perform with it and what the interface should look like. I know I’m not alone, either.)
The basic function of the app: load up a selection of audio clips, and the software distributes them graphically in the interface. Next:
A “grain cloud” may be added to the screen under the current mouse position with the press of a key. This cloud has an internal timing system that triggers individual grain voices in sequence. The user has control over the number of grain voices in a cloud, the overlap of these grains, the duration, the pitch, the window/envelope, and the extent of random motion in the XY plane. By selecting a cloud and moving it over a rectangle, the sound contained in the rectangle will be sampled at the relative position of each grain voice as it is triggered. By moving the cloud along the dimension of the rectangle that is orthogonal to the time dimension, the amplitude of the resulting grain bursts changes.
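That description maps onto a standard granular-synthesis loop: each voice reads a short, windowed slice of the source at a slightly randomized position, resampled by a pitch factor, and the voices are summed. Here’s a minimal sketch of the idea in plain Python — the function names, parameters, and sine-wave source are illustrative assumptions, not Borderlands’ actual code:

```python
import math
import random

def hann(n, length):
    """Hann window value for sample n of a grain of the given length."""
    return 0.5 * (1.0 - math.cos(2.0 * math.pi * n / (length - 1)))

def render_grain_cloud(source, num_voices=8, grain_len=512,
                       pitch=1.0, spray=0.1, seed=0):
    """Render one burst of a grain cloud over a source buffer.

    Each voice reads a Hann-windowed grain from a random position
    near the buffer's centre (the cloud's "random motion"),
    resampled by `pitch`, and all voices are summed into one output.
    """
    rng = random.Random(seed)
    out = [0.0] * grain_len
    for _ in range(num_voices):
        # random offset around the buffer centre, like the cloud's spray
        centre = 0.5 + rng.uniform(-spray, spray)
        start = max(0, int(centre * (len(source) - grain_len * pitch - 1)))
        for n in range(grain_len):
            pos = start + n * pitch          # pitch = playback rate
            i = int(pos)
            frac = pos - i
            # linear interpolation between neighbouring samples
            s = source[i] * (1 - frac) + source[i + 1] * frac
            out[n] += s * hann(n, grain_len) / num_voices
    return out

# a 440 Hz test tone stands in for "the sound contained in the rectangle"
source = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
burst = render_grain_cloud(source)
```

Moving the cloud over a different region would change `centre`, and moving it along the orthogonal axis would scale the output amplitude, per the description above.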
An extended demo shows in greater detail how this all works:
Chris is a second-year Master’s student at Stanford University’s Center for Computer Research in Music and Acoustics [CCRMA] in California. The iPad version is coming soon, but you can get started with the Linux and Mac versions right away, and even join a SoundCloud group to share what you’re making. You’ll find all the details, and links to source code, on the CCRMA site. (And if someone feels like building this on Windows, you can save Chris the trouble.)
I also love this Max Mathews quote Chris shares as inspiration:
Max Mathews, in a lecture delivered at Stanford in the fall of 2010
“Any sound that the human ear can hear can be made by a sequence of digits. And that’s a true theorem. Most of the sounds that you make, shall we say randomly are either uninteresting, or horrible, or downright dangerous to your hearing. There’s an awful lot to be learned on how to make sounds that are beautiful.”
Beyond the technology, beyond this design I admire, anything that sends you on the path to making beautiful sound seems to be a worthy exercise. It’s a challenge you can face every day and never grow tired.
http://modulationindex.com/ [Chris' site, with more information]
Thanks to Ingmar Koch (Dr. Walker) for the tip!
Through the eyes of satellites, roving Google trucks, aerial imagery, and more, we have plenty of eyes on our planet. But what does it sound like here on Earth?
In a Web application and accompanying art installation, the world turns as it echoes sounds recorded around the world on Creative Commons-licensed site Freesound.org. It’s stunning to hear our world’s acoustic diversity – in some strange way, even more than seeing it, in that sounds can instantly give you a sense of place and time. You can load a version on your browser or on the iPad; then, from the world’s cities, listen as sounds mix automatically from one locale to another in an ambient sound score.
The basic notion is something we see repeated regularly, even with this visualization; this is a fantasy those of us who work in sound routinely entertain. But it’s doubly worth mentioning, in that it’s an excuse to mention the lovely Japanese label/artist/laboratory 43d.
43d engages sound through a variety of tools. In the 43d laboratory, the spinning Earth interface finds its way into an installation (video below), iPad app, and browser app, as workshops send participants into the field to listen to their environment and gather more sounds. Such exercises have an added bonus for us electronic musicians, of course, as collected sounds can easily become the raw materials of music in any genre through the wonderful alchemy of our machines.
The installation and sound mix project:
“World Sound Mix for BankART LIFE3” is a sound-visual installation that generates new soundscapes from around the world. The work continuously mixes sounds from two selected points somewhere in the world, drawn from a vast database of environmental recordings, to generate a new soundscape.
For this exhibition, we set up a magic box that resonates with a mixed soundscape of Sapporo and somewhere else in the world. During the exhibition, a globe in the box keeps turning and resonating sounds in real time.
About sounds data:
World Sound Mix is based on the sound database of the Freesound project, whose sounds have been recorded and gathered by sound hunters around the world. The sound data is used under the Creative Commons Sampling+ 1.0 License. Using the username and Freesound sound ID shown on the globe, listeners can find the original content.
Freesound.org, a terrific source of sounds:
But what I especially like about all of this is that the environmental sounds don’t have to exist in a vacuum. 43d is also an ambient music label, the work of artist Junichi Oguro:
A sound artist who widens the realm of music, born in Sapporo in 1974.
He began composing music in childhood and received a grand prize at a national contest. In 2006 he moved to Berlin, making music in fields ranging from commercial music for TV spots to sound-space design across Europe. He also shows sound art pieces in the realm of contemporary art, and he runs the ambient label 43d, established for creating leading-edge sounds.
The just-released “Unfield” is breathtaking, turning effortlessly from rough-shod digital glitches to icy-sweet ballads and intimate, gorgeous vocals by Malloy Nagasawa. It combines custom software and control with more conventional recording techniques:
Have a listen:
Hope to hear more from this whole project.
The teaser trailer for Warballoon Games’ Star Command has landed, giving us a glimpse into the fantastic pixel art of their upcoming spaceship-management game.
Star Command allows players to take control of a starship in humanity’s distant future. Players build their ship, staff and manage their crew, explore the galaxy, battle other species, discover far-off worlds, and attempt to control the universe.
The game is scheduled to launch this summer on iOS and Android devices, but the developers have plans for a future, “Ultimate” PC version as well, which would include “all the campaigns, all the expansions, [and] possible multiplayer.” I can’t wait!
An Ars story from earlier this month reported that iPhones expose the unique identifiers of recently accessed wireless routers, which generated no shortage of reader outrage. What possible justification does Apple have for building this leakage capability into its entire line of wireless products when smartphones, laptops, and tablets from competitors don't? And how is it that Google, Wigle.net, and others get away with publishing the MAC addresses of millions of wireless access devices and their precise geographic location?
Some readers wanted more technical detail about the exposure, which applies to three access points the devices have most recently connected to. Some went as far as to challenge the validity of security researcher Mark Wuergler's findings. "Until I see the code running or at least a youtube I don't believe this guy has the goods," one Ars commenter wrote.
According to penetration tester Robert Graham, the findings are legit.
Pure Data (Pd) is already a free, convenient tool for making synths, effects, sequencers, and other musical generators. But imagine stripping away all the things that tie it to a platform – UI, specific hardware support – so it will run just about anywhere, on anything, in any context.
That’s what libpd, a free, embeddable, open source (BSD) tool for making interactive music, does. Coders can take their favorite language and their favorite platform, and just plug in the power of Pd. They hardly have to know anything about Pd at all – they can let an intrepid Pd patcher create the interactive sound effects and dynamic music for their game and just drop a patch into their assets.
One of the most powerful applications for this is the ability to add interactive music and sound to mobile apps, on iOS and Android, without writing and testing a bunch of custom DSP code. And that has enabled the use of libpd in apps as successful as Inception: The App. With music by Hans Zimmer and a custom “dream” experience created by RjDj, that app racked up millions of downloads in under a couple of months, and then, far from sitting idle on the app launch screen, went on to clock in over a century of user “dreamtime.”
Okay, so, you’re sold. You want to see what this thing can do, and maybe try it out, and you’re wondering where to start. So, here’s some good news: there’s a new site and a new book to get you going.
The site: libpd.cc
libpd has a new home on the Web, both in the form of a new GitHub repository to organize all the code, docs, and samples, and a site that brings together a showcase of what apps built with libpd can do and points you to where to learn more. The single destination is now hosted here by CDM:
I built that site, so please, if there’s anything you’d like to see or you’ve got your own work created with libpd, let me know about it.
Even just having selected a few key highlights of apps built with libpd, it’s impressive what people are already doing with this tool:
The book, and a chat with its author
A new book published by O’Reilly focuses on building mobile apps using libpd, for iOS and Android. (iPhone, iPod touch, Android phones and tablets, and yes, even that “new iPad” introduced yesterday are therefore all fair game.)
You can read a section of the book right here on CDM, for a taste of what’s in store:
How to Make a Music App for iOS, Free, with libpd: Exclusive Book Excerpt
It’s an exceptional, comprehensive look at development using libpd, covering iOS and Android, but also a complete look at the libpd API and how to use it. For Pd patchers just getting started with iOS and Android, it includes all of the basics of how to use libpd in your mobile development environment. For mobile developers new to Pd and patching, it makes clear how you’d communicate with Pd, so you can either dive into Pd yourself or properly interface with patches made by musicians, composers, and sound designers with whom you may be collaborating. It’s an ideal title for anyone interested in taking a game and giving it a more dynamic soundtrack – in sound effects, music or both – or for people building mobile musical instruments and effects, sonic toys, interactive albums, or, really, anything at all that involves sound or music. Since it walks you through the entire development experience, you can sit down with it in the course of a few evenings, and get a complete picture of how to integrate Pd with your development workflow.
Dr. Peter Brinkmann, the principal developer of libpd, is the author of the title. I asked Peter to explain a little bit about the book, who it’s for (hint: you!), and what’s in it (hint: stuff you want to read!) …
CDM: How did this book come about? And the book process really helped drive improvements to libpd, too?
Peter B.: Shawn Wallace, an editor at O’Reilly, contacted me last summer and asked whether I would be interested in writing a short book on libpd. I was interested, and so I talked to my [Google] manager (“No conflict — we all have time-consuming hobbies!”) as well as a couple of colleagues who had written books for O’Reilly. They made a token attempt to dissuade me, but it was clear that they had enjoyed writing their books, and they seemed quite proud of the result, too.
Once I had made up my mind to write a book, the next question was whether to self-publish or go with O’Reilly. Self-publishing is a viable option these days, but then I decided that I really wanted an animal on the cover. Besides, I had never written a book before, and having the support of O’Reilly’s editorial staff made the prospect seem less daunting.
The first draft was done in mid-November, but at that time it was basically science fiction because it presented libpd the way I wanted it to be, not the way it was at the time.
So, after the bulk of the writing was done, libpd needed to be revised so that it would actually be in agreement with the book. In particular, Rich Eakin and I rewrote the iOS components for better performance and usability. That delayed the book by a month or so, which turned out to be a great stroke of luck because that was when I discovered that Xcode 4.2 had changed the entire development model by introducing automatic reference counting, instantly rendering existing texts obsolete. That included my chapter on iOS, and so I had to sit down and rewrite it.
After that, the rest happened rather quickly — getting reviews, revising the draft, going through the production process. O’Reilly’s toolchain is remarkably efficient, using asciidoc and docbook in a Subversion repository. The editorial staff is great, too. I’m amazed to see how quickly it all came together.
How did you approach writing the book?
For the first draft, I just imagined that I was teaching a class on libpd. When you’re lecturing in front of an audience, you don’t have time to polish every sentence; you just have to talk and maintain some sense of momentum. That approach helps a lot when facing a blank page. After that, it’s many, many rounds of revisions to eliminate weak or redundant sentences.
For the sample code, I picked one project that uses all major components of libpd. That provided a natural progression from idea to completion, while touching on all important points in their proper context. I’m basically providing running commentary on my thought process when making an app, including common mistakes and pitfalls. This way, readers will know how to recognize and work around most problems.
Another trick is to write more than necessary. The first draft contained a lot of gratuitous editorializing. Those parts were never meant to make it into the finished text, but they were fun to write and they kept me going when I wasn’t quite sure what to write next.
Who is it for?
The book explains how to patch for libpd, and how to write apps with libpd, with special emphasis on the interface between Pd patches and application code. It’s for mobile developers who want to add real-time audio synthesis to their projects, as well as sound designers who want to deploy their work on mobile devices. It’s light on prerequisites; if you know how to write a basic app for Android or iOS, you’re ready to read the book.
Ed.: I’d add to that, given that there are such great tutorials on app development for Android and iOS – even many of them free, including some very worthwhile documentation from Google and Apple — if you’ve messed with Pd, you should give the book a try. And if you haven’t messed with Pd, this could be a great excuse. This book won’t teach you Pd, but it’ll make very clear how to glue everything together. -PK
Why does a book like this matter? What do you hope will come out of it?
I hope that the book will help popularize real-time procedural audio, in games and other apps. I’m thrilled to see all the projects that use libpd, and I hope that the book will help people create even more awesomeness of this kind. One thing I only fully realized when writing the book is that libpd lets developers use DSP code like a media file: An audio developer creates a Pd patch, and the app developer just drops it into the resources of the app and loads and triggers it as needed. I guess this was implicit in a blog post I wrote on workflow and prototyping a year ago, but I think the DSP-as-media angle is even more powerful. I hope that the book will bring this out.
The book project has already improved libpd. Whenever I faced the choice between fixing an awkward bit of code or explaining the awkwardness in the book, I chose to fix the code. That took care of all the little things that were sort of bothering me but didn’t seem significant enough to spend time on. It also gave us a deadline for a number of related things that we wanted to do, such as migrating to GitHub and launching the new website, libpd.cc. Ed.: Cough. Yes, glad that gave me that deadline – and thanks to Peter B. for the extra push! -PK
Congrats to Peter on his first animal-on-a-cover! It’s really a great book: you read it, and feel like making more new things, inventing new creations that produce sound and music. And that’s a very good thing.