
online presence


Fifteen years ago, you weren't a participant in the digital age unless you had your own homepage. Even in the late 1990s, services abounded to make personal pages easy to build and deploy—the most famous is the now-defunct GeoCities, but there were many others (remember Angelfire and Tripod?). These were the days before the "social" Web, before MySpace and Facebook. Instant messaging was in its infancy and creating an online presence required no small familiarity with HTML (though automated Web design programs did exist).

Things are certainly different now, but there's still a tremendous amount of value in controlling an actual honest-to-God website rather than relying solely on the social Web to provide your online presence. The flexibility of being able to set up and run anything at all, be it a wiki or a blog with a tipjar or a photo hosting site, is awesome. Further, the freedom to tinker with both the operating system and the Web server side of the system is an excellent learning opportunity.

The author's closet. Servers tend to multiply, like rabbits. (Credit: Lee Hutchinson)

It's super-easy to open an account at a Web hosting company and start fiddling around there—two excellent Ars reader-recommended Web hosts are A Small Orange and Lithium Hosting—but where's the fun in that? If you want to set up something to learn how it works, the journey is just as important as the destination. Having a ready-made Web or application server cuts out half of the work and thus half of the journey. In this guide, we're going to walk you through everything you need to set up your own Web server, from operating system choice to specific configuration options.
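If you want a zero-commitment taste of that journey right now, a few lines of Python will serve a directory over HTTP from your own machine. This is just a toy sketch for poking around, not the production setup this guide covers; the port number is an arbitrary choice.

    # Toy sketch: serve the current directory over HTTP using only
    # Python's standard library. Useful for experimenting with how a
    # web server behaves; not a substitute for a real, hardened server.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    PORT = 8080  # arbitrary local port for experimentation

    httpd = HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
    print(f"Serving HTTP on port {PORT}; press Ctrl+C to stop")
    httpd.serve_forever()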



You may have heard that our Web FontFonts are now supported by 98% of all desktop browsers. With a tantalising typographical treasure trove of 2240 Web FontFonts, it's sometimes tricky to decide which web font is the best fit for your online brand presence. To provide a little inspiration and help you choose, we've brought together a selection of in-use cases of our top ten most popular web fonts that have caught our eye recently.

FF Meta & FF Meta Serif

Parse by How

The marvellous FF Meta and FF Meta Serif, Erik Spiekermann's No-Brainer, feature on this great site, Parse by How. Parse is a real smörgåsbord of design content; they scour the web to bring together what they call design 'tapas for the brain'.

FF DIN

FF DIN in-use

One of our bestsellers and a real classic typeface, FF DIN features on the Budget 4 Change website. The thin horizontal strokes and fluent curves of FF DIN lend a sober and solid tone to the site, which is dedicated to mapping, tracking and analyzing donor government budgets against official development assistance.

FF Unit & FF Unit Slab

Typolution

Evolution, Revolution, Solution. That is the simple philosophy behind Typolution, the 'purely' typographical website that covers the latest developments, innovations and advancements in the industry (all in German). The site uses our very own FF Unit for the body text and FF Unit Slab for the headers, offering a cool yet disciplined tone.

FF Dax

Juventus Member

FF Dax adorns the official fan page of the football club Juventus. The site is the go-to resource for Juventus fans looking to join the community of supporters and buy a subscription.

FF Scala & FF Scala Sans

VRB

The website for the VRB (Vorratsgesellschaft) organization, based in Germany, is set in two of our bestselling and most serious text faces, the formidable FF Scala and FF Scala Sans. The VRB offers off-the-shelf companies and legal advice.

FF Tisa & FF Tisa Sans

Espen Dreyer

Espen Dreyer is a Norwegian freelance photographer, and his website is packed full of lovely snapshots. It is set in one of our most prolific Web FontFonts, FF Tisa, alongside the recently released FF Tisa Sans.

FF Kievit

Blossom

The new start-up Blossom offers a brand new product management tool. Its website features Mike Abbink's FF Kievit, which lends a clean and open touch to the design.

FF Clan

Red Bull Academy Radio

The website of the Red Bull Music Academy Radio really packs a punch with its use of the strong and powerful FF favorite, FF Clan, designed by Łukasz Dziedzic.

FF Fago

Regmodharz

The mighty fine FF Fago, one of our super families designed by Ole Schäfer, provides a conscious corporate air to the online presence of the green energy project, RegModHarz.

FF Dagny

Two Arms Inc.

Two Arms Inc. are a team of two who combine illustration and design in a delightful manner. Based in Brooklyn, they are famed for their passion for screen printing. Their website employs FF Dagny, by Örjan Nordling and Göran Söderström. Great minds think alike, as we use it on our site too!

We've recently received some lovely examples of FontFonts in use. Keep 'em coming! If you've used an FF in a recent project and you'd like to be featured on our site, please email lucy@fontfont.de.

via FontFont News Feed http://feedproxy.google.com/~r/Fontfont/~3/3CRV8NxKQ8o/top-10-webfonts


  

In my nearly two decades as an information architect, I’ve seen my clients flush away millions upon millions of dollars on worthless, pointless, “fix it once and for all” website redesigns. All types of organizations are guilty: large government agencies, Fortune 500s, not-for-profits and (especially) institutions of higher education.

Worst of all, these offending organizations are prone to repeating the redesign process every few years like spendthrift amnesiacs. Remember what Einstein said about insanity? (Doing the same thing over and over and expecting different results, if you don't know.) It's as if they enjoy the sensation of failing spectacularly, publicly and expensively. Sadly, redesigns rarely solve the actual problems faced by end users.

I’m frustrated because it really doesn’t have to be this way. Let’s look at why redesigns happen, and some straightforward and inexpensive ways we might avoid them.

The Diagnostic Void

Your users complain about your website’s confounding navigation, stale content, poor usability and other user experience failures. You bring up their gripes with the website’s owners. They listen and decide to take action. Their hearts are in the right place. But the wheels quickly come off.

Most website owners don't know how to diagnose the problems of a large, complex website. It's just not something they were ever taught to do. So, they're put in the unfortunate, uncomfortable position of operating like country doctors who've suddenly been tasked with saving their patients from a virulent new pandemic. It is their responsibility, but they're simply unprepared.

Sadly, many website owners fill this diagnostic void — or, more typically, allow it to be filled — with whatever solution sounds best. Naturally, many less-than-ethical vendors are glad to dress up their offerings as solutions to anyone with a problem — and a budget. The tools themselves (search engines, CMSes, social apps) are wonderful, but they're still just tools — very expensive ones, at that — and not solutions to the very specific problems that an organization faces. Without proper diagnostics to guide the configuration of tools, any resulting improvements to the user experience will be almost accidental.

Sometimes design agencies are brought in to fill the diagnostic void. And while not all agencies are evil, a great many follow a business model that depends on getting their teams to bill as many hours as they can and as soon as possible. Diagnostics can slow the work down (which is why clients rarely include a diagnostic phase in their RFPs). So, many agencies move to make a quick, tangible impression (and make their clients happy) by delivering redesigns that are mostly cosmetic.

A pretty face can last only a few years, but by then the agency is long gone. Invariably, the new owner wishes to make their mark by freshening or updating the website’s look. And another agency will be more than happy to oblige. Repeat ad nauseam, and then some.

Oh, and sometimes these redesigns can be pricey. Like $18 million pricey.

See why I’m so grouchy?

Forget the Long Tail: The Short Head Is Where It’s At

Whether you’re a designer, researcher or website owner, I’ve got some good news for you: diagnostics aren’t necessarily difficult or expensive. Better yet, you’ll often find that addressing the problems you’ve diagnosed isn’t that hard.

And the best news? Small simple fixes can accomplish far more than expensive redesigns. The reason? People just care about some stuff more than they care about other stuff. A lot more. Check this out and you’ll see:

This hockey-stick-shaped curve is called a Zipf curve. (It comes from linguistics: Zipf was a linguist who liked to count words… but don’t worry about that.) Here it is in dragon form, displaying the frequency of search queries on a website. The most frequently searched queries (starting on the left) are very, very frequent. They make up the “short head.” As you move to the right (to the esoteric one-off queries in the “long tail”), query frequency drops off. A lot. And it’s a really long tail.

This is absolutely the most important thing in the universe. So, to make sure it’s absolutely clear, let’s make the same point using text:

Query's rank    Cumulative %    Query's frequency    Query
1               1.40%           7,218                campus map
14              10.53%          2,464                housing
42              20.18%          1,351                web enroll
98              30.01%          650                  computer center
221             40.05%          295                  msu union
500             50.02%          124                  hotels
7,877           80.00%          7                    department of surgery

In this case, tens of thousands of unique queries are being searched for on this university website, but the first one accounts for 1.4% of all search traffic. That’s massive, considering that it’s just one query out of tens of thousands. How many short-head queries would it take to get to 10% of all search traffic? Only 14 — out of tens of thousands. The 42 most frequent queries cover over 20% of the website’s entire search traffic. About a hundred gets us to 30%. And so on.
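If you want to see how those cumulative percentages fall out of a raw query log, the arithmetic takes only a few lines of Python. The counts below are invented stand-ins for illustration, not the university's actual data.

    # Sketch of the arithmetic above: given (query, count) pairs from a
    # site-search log, how few top-ranked queries cover 10%, 20%, 30%...
    # of all search traffic?
    from collections import Counter

    def short_head(query_counts, thresholds=(0.10, 0.20, 0.30, 0.50)):
        """Map each cumulative-share threshold to the number of
        top-ranked queries needed to reach it."""
        total = sum(query_counts.values())
        running, rank, needed = 0, 0, {}
        remaining = list(thresholds)
        for query, count in query_counts.most_common():
            running += count
            rank += 1
            while remaining and running / total >= remaining[0]:
                needed[remaining.pop(0)] = rank
        return needed

    # Invented stand-in counts; a real log has thousands of long-tail rows.
    log = Counter({"campus map": 7218, "housing": 2464,
                   "web enroll": 1351, "hotels": 124})
    print(short_head(log))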

It’s Zipf’s World; We Just Live in It

This is very good news.

Want to improve your website’s search performance? Don’t rip out the search engine and buy a new one! Start by testing and improving the performance of the 100 most frequent queries. Or, if you don’t have the time, just the top 50. Or 10. Or 1 — test out “campus map” by actually searching for it. Does something useful and relevant come up? No? Why not? Is the content missing or mistitled or mistagged or jargony or broken? Is there some other problem? That, folks, is diagnostics. And when you do that with your website’s short head, your diagnostic efforts will go a very long way.
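That testing loop is easy to automate for your top 10, 50 or 100 queries. Below is a hedged sketch: the search URL and the "no results" marker are placeholders, because every site's search engine exposes something different.

    # Sketch of the diagnostic loop: run each short-head query against
    # your site's search page and flag the ones that come back empty.
    # SEARCH_URL and the "no results" marker are hypothetical; adapt
    # them to whatever your own search engine actually emits.
    import urllib.parse
    import urllib.request

    SEARCH_URL = "https://www.example.edu/search?q="  # hypothetical endpoint

    def check_queries(queries):
        for query in queries:
            url = SEARCH_URL + urllib.parse.quote(query)
            with urllib.request.urlopen(url) as response:
                page = response.read().decode("utf-8", errors="replace")
            if "no results" in page.lower():
                print("FAIL:", query, "- nothing useful; diagnose this one")
            else:
                print("ok:  ", query)

    check_queries(["campus map", "housing", "web enroll"])  # your top N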

The news gets better: Zipf is a rule. The search queries for all websites follow a Zipf distribution.

And the news gets even jump-up-and-down-and-scream-your-head-off better: Zipf is true not only for your website's search queries. Your content works the same way! A small subset of your website's content does the heavy lifting. Much of the rest has little or no practical value at all. (In fact, I've heard a rumor that 90% of Microsoft.com's content has never, ever been accessed. Not once. But it's just a rumor. And you didn't hear it here.) Bottom line: don't redesign all of your content — focus on the stuff that people actually need.

You’ll also see a short head when it comes to your website’s features. People need just a few of them; the rest are gravy.

And there’s more. Of all the audience types that your website serves, one or two matter far more than the others. What tasks do those audience types wish to accomplish on your website? A few are short-head tasks; the rest just aren’t that important.

As you can see, the Zipf curve is everywhere. And fortunately, the phenomenon is helpful: you can use it to prioritize your efforts to tweak and tune your website’s content, functionality, searchability, navigation and overall performance.

Your Website Is Not A Democracy

When you examine the short head — of your documents, your users' tasks, their search behavior and so forth — you'll know where to find the most important problems to solve. In effect, you can stop boiling the ocean and start prioritizing your efforts to diagnose and truly solve your website's problems.

Now, let’s put these short-head ideas together. Below is a report card for an academic website that starts with the short head of its audience:

In other words, of all the audience types this university website has, the three most important are the people who might pay money to the university (applicants), the people who are paying money now (students) and the people who will hopefully pay money for the rest of their lives (alumni). How do we know they're the most important audiences? We could go by user research; for example, the analytics might suggest that these audiences generate more traffic than anyone else. Or perhaps the university's stakeholders believe that these are the most important audiences in terms of influence and revenue. Or some combination of both. Whatever the case, these three audiences likely swamp all other segments in importance.

Then, we would want to know the short-head tasks and information needs of each audience type. We might interview stakeholders to see what they think (column 2). And we might perform research — user interviews and search analytics, for example — to find out what users say is most important to them (column 3).

Of course, as the good folks at xkcd demonstrate, stakeholders and users don’t always see things the same way:

That’s why talking to both stakeholders and users is important. And once you’ve figured out the short head for each, you’ll need to earn your salary and, through some careful negotiation, combine your takes on each audience type’s needs. That’s what we’ve done in column 4.

Finally, in column 5, we’ve tested each task or need and evaluated how well it works. (Because it’s a university-related example, letter grades seemed appropriate.) You can do this evaluation in an expensive, statistically significant way; but really, enough research is out there to suggest that you don’t need to spend a lot of time and money on such testing. More importantly, these needs and tasks are often fairly narrow and, therefore, easy to test.

So, after testing, we can see what’s not going well. Finding information on “mentoring” is hard for applicants. And current students have a devil of a time when they “look up grades.”

Now we’re done diagnosing the problems and can begin making fixes. We can change the title of the “Paired Guidance Program” page to “Mentoring.” We can create a better landing page for the transcript application. The hard part, diagnostics, is out of the way, and we can now fix and tune our website’s performance as much as our resources allow.

From Project To Process To Payoff

These fixes are typically and wonderfully small and concrete, but because they live in the short head, they make a huge and lovely impact on the user experience — at a fraction of the cost of a typical redesign.

The tuning process itself is quite simple; it's the same process we used to arrive at the report card above.

If you repeat this simple process on a regular basis — say, every month or quarter — then you can head off the entropy that causes fresh designs and fresher content to go rotten. Thus, the redesign that your organization has scheduled for two years from now can officially be canceled.

Your website’s owners ought to be happy about all this. And you should be, too: rather than tackling the project of getting your website “right” — which is impossible — you can now focus on tweaking and tuning it from here on out. So, forget redesigns, and start owning and benefiting from a process of continual improvement.

Special Thanks – Illustrations

Eva-Lotta is a UX Designer and Illustrator based in London, UK where she currently works as an interaction designer at Google. Besides her daytime mission of making the web a more understandable, usable and delightful place, she regularly takes sketchnotes at all sorts of talks and conferences and recently self-published her second book. Eva-Lotta also teaches sketching workshops and is interested in (something she calls) visual improvisation. Exploring the parallels between sketching and improvisation, she experiments with the principles from her theater improvisation practice to inspire visual work.

(al)

© Louis Rosenfeld for Smashing Magazine, 2012.


The web as we know and build it has primarily been accessed from the desktop. That is about to change. The ITU predicts that in the next 18–24 months, mobile devices will overtake PCs as the most popular way to access the web. If these predictions come true, very soon the web—and its users—will be mostly mobile. Even designers who embrace this change can find it confusing. One problem is that we still consider the mobile web a separate thing. Stephanie Rieger of futurefriend.ly and the W3C presents principles to understand and design for a new normal, in which users are channel agnostic, devices are plentiful, standards are fleeting, mobile use doesn’t necessarily mean “hide the desktop version,” and every byte counts.


The prevalence of free, open WiFi has made it rather easy for a WiFi eavesdropper to steal your identity cookie for the websites you visit while you're connected to that WiFi access point. This is something I talked about in Breaking the Web's Cookie Jar. It's difficult to fix without making major changes to the web's infrastructure.

In the year since I wrote that, a number of major websites have "solved" the WiFi eavesdropping problem by making encrypted HTTPS web traffic either an account option or mandatory for all logged-in users.

For example, I just noticed that Twitter, transparently to me and presumably all other Twitter users, switched to an encrypted web connection by default. You can tell because most modern browsers show the address bar in green when the connection is encrypted.

Twitter's HTTPS encryption indicators in the browser address bar

I initially resisted this as overkill, except for obvious targets like email (the skeleton key to all your online logins) and banking.

Yes, you can naively argue that every website should encrypt all their traffic all the time, but to me that's a "boil the sea" solution. I'd rather see a better, more secure identity protocol than ye olde HTTP cookies. I don't actually care if anyone sees the rest of my public activity on Stack Overflow; it's hardly a secret. But gee, I sure do care if they somehow sniff out my cookie and start running around doing stuff as me! Encrypting everything just to protect that one lousy cookie header seems like a whole lot of overkill to me.
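For what it's worth, once a site does offer HTTPS, that one lousy cookie header can at least be scoped to encrypted connections. Here's a minimal Python sketch of the relevant Set-Cookie attributes; the cookie name and value are made up for illustration.

    # Sketch: the Secure attribute tells the browser never to send this
    # cookie over plain HTTP, so a WiFi sniffer can't grab it; HttpOnly
    # additionally hides it from page JavaScript.
    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session_id"] = "opaque-random-token"  # hypothetical name/value
    cookie["session_id"]["secure"] = True     # only transmitted over HTTPS
    cookie["session_id"]["httponly"] = True   # invisible to scripts
    print(cookie.output())
    # -> Set-Cookie: session_id=opaque-random-token; HttpOnly; Secure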

Of course, there's no reason to encrypt traffic for anonymous, not-logged-in users, and Twitter doesn't. You get a plain old HTTP connection until you log in, at which point they automatically switch to HTTPS encryption. Makes sense.
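As a rough sketch of that switch, here is what the server-side rule might look like as WSGI middleware in Python. The session cookie name is an assumption for illustration; I have no idea how Twitter actually implements it.

    # Sketch of the "HTTPS once you log in" pattern: anonymous requests
    # pass through over plain HTTP, but any request carrying a session
    # cookie is redirected to the encrypted version of the same URL.
    # The cookie name "session_id" is an assumption, not Twitter's.
    def force_https_for_logged_in(app):
        def middleware(environ, start_response):
            has_session = "session_id=" in environ.get("HTTP_COOKIE", "")
            is_https = environ.get("wsgi.url_scheme") == "https"
            if has_session and not is_https:
                query = environ.get("QUERY_STRING", "")
                location = ("https://" + environ.get("HTTP_HOST", "") +
                            environ.get("PATH_INFO", "") +
                            ("?" + query if query else ""))
                start_response("301 Moved Permanently",
                               [("Location", location)])
                return [b""]
            return app(environ, start_response)
        return middleware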

It was totally painless for me, as a user, and it makes stealing my Twitter identity, or eavesdropping on my Twitter activity (as fascinating as I know that must sound), dramatically more difficult. I can't really construct a credible argument against doing this, even for something as relatively trivial as my Twitter account, and it has some definite benefits. So perhaps Twitter has the right idea here; maybe encrypted connections should be the default for all web sites. As tinfoil hat as this seemed to me a year ago, now I'm wondering if that might actually be the right thing to do for the long-term health of the overall web, too.

ENCRYPT ALL THE THINGS

Why not boil the sea, then? Let us encrypt all the things!

HTTPS isn't (that) expensive any more

Yes, in the hoary old days of the 1999 web, HTTPS was quite computationally expensive. But thanks to 13 years of Moore's Law, that's no longer the case. It's still more work to set up, yes, but consider the real world case of GMail:

In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.

HTTPS means The Man can't spy on your Internet

Since all the traffic between you and the websites you log in to would now be encrypted, the ability of nefarious evildoers to …

  • steal your identity cookie
  • peek at what you're doing
  • see what you've typed
  • interfere with the content you send and receive

… is, if not completely eliminated, drastically limited. Regardless of whether you're on open public WiFi or not.

Personally, I don't care too much if people see what I'm doing online since the whole point of a lot of what I do is to … let people see what I'm doing online. But I certainly don't subscribe to the dangerous idea that "only criminals have things to hide"; everyone deserves the right to personal privacy. And there are lots of repressive governments out there who wouldn't hesitate at the chance to spy on what their citizens do online, or worse. Much, much worse. Why not improve the Internet for all of them at once?

HTTPS goes faster now

Security always comes at a cost, and encrypting a web connection is no different. HTTPS is going to be inevitably slower than a regular HTTP connection. But how much slower? It used to be that encrypted content wouldn't be cached in some browsers, but that's no longer true. And Google's SPDY protocol, intended as a drop-in replacement for HTTP, even goes so far as to bake encryption in by default, and not just for better performance:

[It is a specific technical goal of SPDY to] make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.

There's also SSL False Start, which requires a modern browser but reduces the painful latency inherent in the expensive, but necessary, handshaking required to get encryption going. SSL encryption of HTTP will never be free, exactly, but it's certainly a lot faster than it used to be, and it's getting faster every year.

Bolting on encryption for logged-in users is by no means an easy thing to accomplish, particularly on large, established websites. You won't see me out there berating every public website for not offering encrypted connections yesterday because I know how much work it takes, and how much additional complexity it can add to an already busy team. Even though HTTPS is way easier now than it was even a few years ago, there are still plenty of tough gotchas: proxy caching, for example, becomes vastly harder when the proxies can no longer "see" what the encrypted traffic they are proxying is doing. Most sites these days are a broad mashup of content from different sources, and technically all of them need to be on HTTPS for a properly encrypted connection. Relatively underpowered and weakly connected mobile devices will pay a much steeper penalty, too.

Maybe not tomorrow, maybe not next year, but over the medium to long term, adopting encrypted web connections as a standard for logged-in users is the healthiest direction for the future of the web. We need to work toward making HTTPS easier, faster, and most of all, the default for logged in users.



tsu doh nimh writes "A Wikileaks-style war of attrition between two competing rogue Internet pharmacy gangs has exposed some of the biggest spammers on the planet. Brian Krebs uncovers fascinating information about a hacker named 'GeRa' who is supposedly behind the Grum botnet, which is currently sending about one out of every three spam emails worldwide. The story also points to several possible real-identities behind the Internet's largest spam machine."



