
Wikipedia


Stéphane Breitwieser is a Frenchman notorious for his art thefts between 1995 and 2001. He admitted to stealing 239 artworks and other exhibits, worth an estimated US$1.4 billion (£960m), from 172 museums while travelling around Europe and working as a waiter, an average of one theft every 15 days. The Guardian called him “arguably the world’s most consistent art thief.”

http://en.wikipedia.org/wiki/St%C3%A9phane_Breitwieser

Original author: Megan Geuss

A website built by two programmers, Stephen LaPorte and Mahmoud Hashemi, displays recent changes to Wikipedia in real time on a map of the world. When a new change is saved to the crowd-sourced encyclopedia, the title of the edited article shows up on the map at the editor's location, as determined by his or her IP address.

Not all recent changes are shown, however: the website only maps the contributions made by unregistered Wikipedia users, who are identified solely by IP address. This is just as well. A similar website called Wikistream logs all changes to Wikipedia (although not in such a graphically friendly way), and watching the flood of new entries can get overwhelming, fast.

LaPorte and Hashemi said they built their map using the JavaScript library D3, datamaps-world.js, freegeoip.net (a service that looks up the geolocation of IP addresses), and Wikimedia's recent-changes IRC feed. The two programmers note in their blog that “you may see some users add non-productive or disruptive content to Wikipedia. A survey in 2007 indicated that unregistered users are less likely to make productive edits to the encyclopedia.” Helpfully, when you see a change made to a specific article, you can click on that change to view how the page has been edited (and change it back if it merits more editing).
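
The pipeline is simple enough to sketch in a few lines. The Python sketch below approximates it under two stated assumptions: it consumes Wikimedia's public EventStreams HTTP feed rather than the IRC feed LaPorte and Hashemi actually used, and the geolocation endpoint is a hypothetical stand-in for freegeoip.net. The trick at its heart is the one noted above: an unregistered editor's username is literally an IP address.

import ipaddress
import json

import requests  # third-party: pip install requests

STREAM_URL = "https://stream.wikimedia.org/v2/stream/recentchange"
GEO_URL = "https://geoip.example.com/json/{ip}"  # hypothetical geolocation service

def anonymous_edit_locations():
    """Yield (article_title, lat, lon) for each edit by an unregistered user."""
    with requests.get(STREAM_URL, stream=True) as resp:
        for line in resp.iter_lines():
            # Naive SSE parsing: assumes each event's JSON fits on one "data:" line.
            if not line.startswith(b"data: "):
                continue
            payload = line[len(b"data: "):]
            if not payload:
                continue
            change = json.loads(payload)
            user = change.get("user", "")
            try:
                # Unregistered editors are identified by IP address, so
                # anything that parses as an IP marks an anonymous edit.
                ipaddress.ip_address(user)
            except ValueError:
                continue  # registered username; the map skips these
            geo = requests.get(GEO_URL.format(ip=user), timeout=5).json()
            yield change.get("title", ""), geo["latitude"], geo["longitude"]

for title, lat, lon in anonymous_edit_locations():
    print(f"{title}: ({lat}, {lon})")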



How can we make sense of it all?
A few weeks ago, I had dinner with Saumil and Sailesh, co-founders of LocBox.* Instagram had just been acquired by Facebook, and there was speculation (later confirmed) about a big up-round financing of Path. The recent large financing of Pinterest was still in the air, and the ongoing parlor game of when Facebook would go public, and at what price, was still being played. A couple of months prior, Zynga had acquired OMGPOP.

Sailesh wondered aloud, “How much time do we have for any of these?” “How many of them can coexist?” and “Do we really need them?” My answers were, respectively: “A lot.” “Many of them.” and “No, but we want them.” That dinner discussion prompted some observations that I am outlining here, and I invite you to share your own observations in the comments below.

In a nutshell, the Internet has evolved from being a need-driven utility medium with only a handful of winners to a discovery-driven entertainment medium with room for multiple winners. The necessary and sufficient conditions for this evolution are now in place — broadband, real names and tablets are the three horsemen of this New New Web. As consumers, entrepreneurs and investors, we should get used to the fact that the online economy is increasingly blurring with the offline economy, and in the limit, that distinction will disappear. As a result, just as in the real world, the Web of entertainment will be much bigger than the Web of utility.

A Theory of Human Motivation
One framework for understanding the consumer Internet is Maslow’s Hierarchy of Needs, which Abraham Maslow put forward as a way of explaining human behavior at large. The core premise is that once our basic needs of food, shelter, safety and belonging are satisfied, we tend to focus on things that are related to creativity, entertainment, education and self-improvement. A key aspect of this framework is that it’s sequential: Unless the basic needs are met, one cannot focus on other things. As an example, a study in 2011 showed that humans who are hungry will spend more on food and less on non-food items compared to those who are not hungry. Using this framework, we can see how consumer adoption of the Web has evolved over the last 20 years, and why all of the ingredients are only now in place for consumers to use the Web for what Maslow called “self-actualization” — a pursuit of one’s full potential, driven by desire, not by necessity.

1992-2012: Web of Need
Between the AOL IPO in 1992 and the Facebook IPO last month, the Internet has largely been in the business of satisfying basic consumer needs. In 1995, the year Netscape went public and made the Internet accessible to the masses, I was a young product manager for a consumer Internet company called Global Village Communication. We were a newly minted public company, and our hottest product was a “high speed” fax/modem with a speed of 33.6 kbps. Back then, using the Internet as a consumer or making a living off it as a business was rather difficult, and sometimes simply frustrating. In the subsequent years, the basic needs of access, browser, email, search and identity were solved by companies such as AOL, Comcast, Netscape, Yahoo, Google, LinkedIn and Facebook.

2012-?: Web of Want
Today, the billion users on Facebook have reached the apex of Maslow's hierarchy on the Web. All of our basic needs have been satisfied. Now we are in pursuit of self-actualization. It is no surprise that on the Web we are now open to playing games (Zynga, Angry Birds), watching video (YouTube, Hulu), listening to music (Pandora, Spotify), expressing our creativity (Instagram, Twitter, Draw Something), window shopping (Pinterest, Gojee*) and pursuing education (Khan Academy, Empowered*).

The Web Is Becoming Like TV
How do we make sense of a Web where multiple providers coexist, each serving a group of people who share a similar desire? It turns out we already have a very good model for understanding how this can work: television. Specifically, cable television. The Web is becoming like TV, with hundreds of networks or “channels” that are programmed to serve content to an audience with similar desires and demographics. Pinterest, ShoeDazzle, Joyous and Alt12* are programmed for young, affluent women; Machinima, Kixeye and Kabam for mostly male gamers; Gojee* for food enthusiasts; Triposo* for travellers; GAINFitness* for fitness fans; and so on.

In this new new Web, an important ingredient of success is a clear understanding of the identity of your users, to ensure that you are programming to their interests. The good news is that unlike TV, the Web has a feedback loop. Everything can be measured, so the path from concept to success can be more capital-efficient: you can measure what type of programming is working at every step of the way. It's unlikely that the new new Web will ever produce a Waterworld.

Why Now? Broadband, Real Names & Tablets
As my partner Doug Pepper recently wrote, a key question when evaluating a new opportunity is to ask “Why Now?” Certainly, companies like AOL, Yahoo and Myspace have tried before to program the Web to cater to interests of specific audiences. What’s different now? Three things: Broadband, real names and tablets.

The impact of broadband is obvious; we don't need or want anything on a slow Web. With broadband penetration at 26 percent in industrialized countries and 3G penetration at about 15 percent of the world's population, we are just reaching a critical mass of nearly 1B users on the fast Web.

Real names are more interesting. In 1993, the New Yorker ran the now-famous cartoon: “On the Internet, nobody knows you’re a dog.” It succinctly captured the state of the anonymous Web at the time. Reid Hoffman and Mark Zuckerberg changed that forever. Do we find Q&A on Quora more credible than on Yahoo! Answers, celebrity profiles on Twitter more engaging than those on Myspace, and pins on Pinterest more relevant than recommendations in early AOL chatrooms? I certainly do, and that is largely because Quora, Twitter and Pinterest take advantage of real names. Real names are blurring the distinction between online and offline behavior.

Finally, the tablet is the last necessary and sufficient piece, the one that fuels the “Web of Want.” The PC is perfect for the “Web of Need”: when we need something, we can search for it, since we know what we are looking for. Searching is a “lean-forward” experience, typing into our PC, either at work or at the home office. The Web over the last decade has been optimized for this lean-forward search experience; everything from SEO to Web site design to keyword shortcuts in popular browsers makes it efficient. However, smartphones and tablets allow us to move to a “lean-back” experience, flipping through screens with our fingers, often in our living rooms and bedrooms, on the train or at the coffee shop. Tablets make discovery easy and fun, just like flipping channels on TV at leisure. These discoveries prompt us to want things we didn't think we needed.

Early Signs
This thesis is easy to postulate, but is there any evidence that users are looking to the Web as anything more than a productivity platform? As has been reported, mobile devices now make up 20 percent of all U.S. Web traffic, and this usage peaks in the evening hours, presumably when people are away from their office. Analysis from Flurry* shows that cumulative time spent on mobile apps is closing in on TV. We certainly don’t seem to be using the Web only when we need something.

Economy of Need Versus Want
The economy of Want is different from the economy of Need. We humans tend to spend a lot more time and money on things we want compared to things we need. For example, Americans spend more than five hours a day on leisure and sports (including TV), compared to about three hours spent on eating, drinking and managing household activities. Another difference is that when it comes to satisfying our needs, we tend to settle on one provider and give that one all of our business. Think about how many companies provide us with electricity, water, milk, broadband access, search, email and identity. The Need economy is a winner-take-all market, with one or two companies dominating each need. However, when it comes to providing for our wants, we are open to being served by multiple providers. Think about how many different providers are behind the TV channels we watch, restaurants we visit, destinations we travel to and movies we watch. The Want economy can support multiple winners, each with a sizeable business. Instagram, Path, Pinterest, ShoeDazzle, BeachMint, Angry Birds, CityVille, Kixeye, Kabam, Machinima and Maker Studios can all coexist.

Investing in the Web of Want
The chart below shows that over the long term (including a global recession), an index of luxury stocks (companies such as LVMH, Burberry, BMW, Porsche and Nordstrom) outperforms an index of utility stocks (companies such as Con Edison and Pacific Gas & Electric, which offer services we all need). The same applies to an index of media stocks (companies such as CBS, Comcast, News Corp., Time Warner and Viacom), which outperforms both the utilities and the broader stock market. Of course, higher returns come with higher volatility: Nordstrom's beta is 1.6 and CBS' beta is 2.2, compared to 0.29 for PG&E. It is this volatility that has made investing in the Want business look like a career-ending move in Silicon Valley for the past 20-plus years. As the Web evolves from serving our needs to satisfying our wants and, in turn, becomes a much larger economy, sitting on the sidelines of the Web of Want may not be an option.

Let’s Not Kill Hollywood
With a billion users looking for self-actualization and with the widespread adoption of broadband, real names and tablets, the Web is poised to become the medium for creativity, education, entertainment, fashion and the pursuit of happiness. As the offline world shows, large, profitable companies can be built that cater to these desires. Entrepreneurs and investors looking to succeed in the new new Web can learn quite a few lessons from our friends in the luxury and entertainment businesses, which have been managing profitable “want” businesses for decades. The fusion of computer science, design, data, low friction and the massive scale of the Internet can result in something that is better than what either Silicon Valley or Hollywood can do alone. It is no wonder that the team that came to this conclusion before anyone else is now managing the most valuable company in the world.

Epilogue
When we go see a movie or splurge on a resort vacation, we don’t stop using electricity, brushing our teeth or checking our email. The Web of Want is not a replacement for the Web of Need, it is an addition. Many of the Internet companies that satisfied our needs in the last 20 or more years of the Web are here to stay. In fact, they will become more entrenched and stable, with low beta, just like the utilities in the offline world. Microsoft has a beta of exactly 1.0 — it is no more volatile than the overall stock market. And for those longing for the days of “real computer science” on the Web, do not despair. Just keep an eye on Rocket Science and Google X Labs — there is plenty of hard-core engineering ahead.

Disclosures: * indicates an InterWest portfolio company. Google Finance was used for all of the stock charts and beta references.

Keval Desai is a Partner at InterWest, where he focuses on investments in early-stage companies that cater to the needs and wants of consumers. He started his career in Silicon Valley in 1991 as a software engineer. He has been a mentor and investor in AngelPad since inception. You can follow him @kevaldesai.


Social learning start-up Grockit on Thursday launched Learnist, a stand-alone product separate from Grockit's flagship online collaborative test-preparation service.

Learnist builds on Grockit's social teaching concept while adding another element that company founder Farb Nivi claims is crucial: the visual element.

Think of Learnist as something of a mashup between Pinterest and Wikipedia. Users find content from across the Web — videos, news stories, music, SoundCloud links and what have you — and post it to a personal board that other users can follow. It's ideal, Nivi says, for teachers who want to curate multimedia lessons for students to follow, though without the feel of a stodgy, traditional lesson plan.

Grockit has already garnered a following, attracting more than a million users through its test-prep service alone. But the user pool for online test prep is only so large. With the wiki-like Learnist, the company hopes to spread its reach far beyond the cramming crowd. Facebook integration, which the product indeed has, will only help further the cause.

Naturally, iPad and iPhone apps are in the works, expected in the coming weeks.


Mapping Wikipedia

TraceMedia, in collaboration with the Oxford Internet Institute, maps language use across Wikipedia in an interactive visualization, fittingly named Mapping Wikipedia.

Simply select a language, a region, and the metric that you want to map, such as word count, number of authors, or the languages themselves, and you've got a view into "local knowledge production and representation" on the encyclopedia. Each dot represents an article and links to the corresponding Wikipedia entry. Given the number of dots on the map, up to 800,000, it works surprisingly smoothly, apart from the time it initially takes to load the articles.

This is part of a larger body of work from Mark Graham, Bernie Hogan, et al., which focuses mostly on the gaps, specifically in the Middle East and North Africa.

There are obvious gaps in access to the Internet, particularly the participation gap between those who have their say, and those whose voices are pushed to the sidelines. Despite the rapid increase in Internet access, there are indications that people in the Middle East and North Africa (MENA) region remain largely absent from websites and services that represent the region to the larger world.

[via FloatingSheep]



Wikidata, the first new project to emerge from the Wikimedia Foundation since 2006, is now beginning development. The organization, best known for its user-edited encyclopedia Wikipedia, recently announced the new project at February's Semantic Tech & Business Conference in Berlin, describing Wikidata as a new effort to provide a database of knowledge that can be read and edited by humans and machines alike.

There have been other attempts at creating a semantic database built from Wikipedia's data before – for example, DBpedia, a community effort to extract structured content from Wikipedia and make it available online. The difference is that, with Wikidata, the data won't just be made available; it will also be editable by anyone.

The project's goal of developing a semantic, machine-readable database doesn't just push the web forward; it also helps Wikipedia itself. The data will bring all the localized versions of Wikipedia up to par with one another in terms of the basic facts they house. Today, the English, German, French and Dutch versions offer the most coverage, with other languages falling much further behind.

Wikidata will also enable users to ask new types of questions, such as “Which of the world’s ten largest cities have a female mayor?” Queries like this are answered today by user-created Wikipedia Lists – that is, by manually curated, structured answers. Wikidata, on the other hand, will be able to generate these lists automatically.
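
For a sense of what that automation looks like in practice, here is a minimal Python sketch of that very query, run against the SPARQL endpoint Wikidata eventually shipped (query.wikidata.org, a service that postdates this announcement). The property and item identifiers (P31 "instance of", P6 "head of government", P21 "sex or gender", P1082 "population", Q515 "city", Q6581072 "female") are real Wikidata IDs, but the whole example assumes the modern service rather than anything that existed at the time of writing.

import requests  # third-party: pip install requests

# "Which of the world's largest cities have a female mayor?" as SPARQL.
QUERY = """
SELECT ?cityLabel ?mayorLabel ?population WHERE {
  ?city wdt:P31/wdt:P279* wd:Q515 ;   # instance of (a subclass of) city
        wdt:P6 ?mayor ;               # head of government
        wdt:P1082 ?population .       # population
  ?mayor wdt:P21 wd:Q6581072 .        # sex or gender: female
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
ORDER BY DESC(?population)
LIMIT 10
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wikidata-example/0.1"},  # the service asks for one
)
for row in resp.json()["results"]["bindings"]:
    print(row["cityLabel"]["value"], "-", row["mayorLabel"]["value"])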

The initial effort to create Wikidata is being led by the German chapter of Wikimedia, Wikimedia Deutschland, whose CEO Pavel Richter calls the project “ground-breaking,” and describes it as “the largest technical project ever undertaken by one of the 40 international Wikimedia chapters.” Much of the early experimentation which resulted in the Wikidata concept was done in Germany, which is why it’s serving as the base of operations for the new undertaking.

The German chapter will perform the initial development involved in the creation of Wikidata, but will later hand over operation and maintenance to the Wikimedia Foundation when the work is complete. The estimate is that the hand-off will occur a year from now, in March 2013.

The overall project will have three phases, the first of which involves creating one Wikidata page for each Wikipedia entry across Wikipedia’s over 280 supported languages. This will provide the online encyclopedia with one common source of structured data that can be used in all articles, no matter which language they’re in. For example, the date of someone’s birth would be recorded and maintained in one place: Wikidata. Phase one will also involve centralizing the links between the different language versions of Wikipedia. This part of the work will be finished by August 2012.
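
To make the birth-date example concrete: in Wikidata as eventually deployed, the central record lives at wikidata.org rather than the data.wikipedia.org address planned here, and every entity can be fetched as JSON. A minimal Python sketch, assuming the modern Special:EntityData API and the property ID P569 ("date of birth"):

import requests  # third-party: pip install requests

def birth_date(entity_id: str) -> str:
    """Return the centrally stored date of birth for a Wikidata entity."""
    url = f"https://www.wikidata.org/wiki/Special:EntityData/{entity_id}.json"
    entity = requests.get(url).json()["entities"][entity_id]
    claim = entity["claims"]["P569"][0]  # first "date of birth" claim
    return claim["mainsnak"]["datavalue"]["value"]["time"]

# Q42 is Douglas Adams; every language edition can cite this single value.
print(birth_date("Q42"))  # +1952-03-11T00:00:00Z

Because every language edition reads the same record, correcting a fact once corrects it everywhere.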

In phase two, editors will be able to add and use data in Wikidata, and this will be available by December 2012. Finally, phase three will allow for the automatic creation of lists and charts based on the data in Wikidata, which can then populate the pages of Wikipedia.

In terms of how Wikidata will impact Wikipedia’s user interface, the plan is for the data to live in the “info boxes” that run down the right-hand side of a Wikipedia page. (For example: those on the right side of NYC’s page). The data will be inputted at data.wikipedia.org, which will then drive the info boxes wherever they appear, across languages, and in other pages that use the same info boxes. However, because the project is just now going into development, some of these details may change.

[Image: an early concept mockup for Wikidata.]

All the data contained in Wikidata will be published under a free Creative Commons license, which opens it up for use by any number of external applications, including e-government, the sciences and more.

Dr. Denny Vrandečić, who joined Wikimedia from the Karlsruhe Institute of Technology, is leading a team of eight developers to build Wikidata, joined by Dr. Markus Krötzsch of the University of Oxford. Notably, Krötzsch and Vrandečić were both co-founders of the Semantic MediaWiki project, which has pursued goals similar to those of Wikidata over the past few years.

The initial development of Wikidata is being funded through a donation of 1.3 million euros, half of it granted by the Allen Institute for Artificial Intelligence, an organization established by Microsoft co-founder Paul Allen in 2010. The goal of the Institute is to support long-range research activities that have the potential to accelerate progress in artificial intelligence, including web semantics.

“Wikidata will build on semantic technology that we have long supported, will accelerate the pace of scientific discovery, and will create an extraordinary new data resource for the world,” says Dr. Mark Greaves, VP of the Allen Institute.

Another quarter of the funding comes from the Gordon and Betty Moore Foundation, through its Science program, and the final quarter comes from Google. According to Google's Director of Open Source, Chris DiBona, Google hopes that Wikidata will make large amounts of structured data available to “all.” (All, meaning, of course, Google itself, too.)

This ties back to all those vague reports of “major changes” coming to Google’s search engine in the coming months, seemingly published far ahead of any actual news (like this), possibly in a bit of a PR push to take the focus off the growing criticism surrounding Google+…or possibly to simply tease the news by educating the public about what the “semantic web” is.

Google, which stated it would be increasing its efforts at providing direct answers to common queries – like those seeking a specific, factual piece of data – could obviously build greatly on top of something like Wikidata. As it moves further into semantic search, it could provide details about the people, places and things its users search for. It would actually know what things are, whether birth dates, locations, distances, sizes or temperatures, and also how they're connected to other points of data. Google previously said it expects semantic search changes to impact 10 to 20 percent of queries. (Google declined to provide any on-the-record comment regarding its future plans in this area.)

Ironically, the results of Wikidata’s efforts may then actually mean fewer Google referrals to Wikipedia pages. Short answers could be provided by Google itself, positioned at the top of the search results. The need to click through to read full Wikipedia articles (or any articles, for that matter) would be reduced, leading Google users to spend more time on Google.



From 1963 to 1969, as part of Project Shipboard Hazard and Defense (SHAD), the U.S. Army performed tests that involved spraying several U.S. ships with various biological and chemical warfare agents while thousands of U.S. military personnel were aboard. The personnel were not notified of the tests and were not given any protective clothing. The agents tested on them included the nerve gases VX and sarin, toxic chemicals such as zinc cadmium sulfide and sulfur dioxide, and a variety of biological agents.

http://en.wikipedia.org/wiki/Human_experimentation_in_the_United_States
