
Browsers

Original author: Peter Bright

Aurich Lawson (with apologies to Bill Watterson)

Google announced today that it is forking the WebKit rendering engine on which its Chrome browser is based. The company is naming its new engine "Blink."

The WebKit project was started by Apple in 2001, itself a fork of a rendering engine called KHTML. The project includes a core rendering engine for handling HTML and CSS (WebCore), a JavaScript engine (JavaScriptCore), and a high-level API for embedding it into browsers (WebKit).

Though known widely as "WebKit," Google Chrome has used only WebCore since its launch in late 2008. Apple's Safari originally used the WebKit wrapper and now uses its successor, WebKit2. Many other browsers use varying amounts of the WebKit project, including the Symbian S60 browser, the BlackBerry browser, the webOS browser, and the Android browser.



Windows 8 will arrive in consumers’ hands later this week and with it will come the first official release of Internet Explorer 10.

It used to be that a new version of IE meant a new set of headaches for developers, but thankfully that’s no longer the case. In fact, when it comes to web standards support IE 10 stacks up pretty well against the competition.

IE 10 adds support for nearly a dozen new HTML5 APIs like Web Sockets, Web Workers, the History API, the Drag and Drop API and the File API. You can look over a complete list on Microsoft’s IE 10 Guide for Developers. There’s plenty of CSS support in this release as well; Animations, Transitions and Transforms are among the many new CSS tools. IE 10 also has experimental support for next-gen layout tools like CSS Grid Layout, CSS Multi-column Layout and CSS Regions.
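Since older versions of IE lack these APIs, it's still worth checking for them before use. A minimal feature-detection sketch (the map keys are illustrative labels of my own, not standard names):

```javascript
// Detect a few of the HTML5 APIs mentioned above by probing for
// their well-known global names.
var features = {
  webSockets: typeof WebSocket !== 'undefined',
  webWorkers: typeof Worker !== 'undefined',
  historyApi: typeof history !== 'undefined' &&
              typeof history.pushState === 'function',
  fileApi: typeof FileReader !== 'undefined'
};

// Returns true only when the named API was detected.
function supports(name) {
  return features[name] === true;
}
```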

For all that is good in IE 10 there are a couple of gotchas web developers should be aware of.

One is that, while IE 10 supports CSS Flexible Box Layout, it appears to support only the older, now non-standard version of Flexbox (the documentation still uses the old syntax). Hopefully Microsoft will fix this with an update, but for the time being only Chrome and Opera have implemented the updated Flexbox syntax.

The other quirk of IE 10 is related to how the browser behaves on Windows 8 tablets. There are two “modes” in Windows 8, the classic desktop and the Metro UI. When IE 10 runs in Metro mode (which is the default) there’s a feature that allows you to “snap” a window to the side of the screen so you can have a browser window open alongside other applications. It’s a nice feature for users, but it has one quirk developers should be aware of — when snapped, IE 10 ignores the meta viewport tag for any viewport smaller than 400 pixels in width. That means that your responsive layouts for smaller screens won’t trigger in snapped mode and your site will be scaled instead. Luckily there’s a fix. In fact developer Tim Kadlec has two solutions, one that uses pixels and one that does not. See Kadlec’s blog for full details.

It’s also worth noting that Microsoft is supporting the @viewport declaration rather than the viewport meta tag (IE 10 uses the prefix: @-ms-viewport). While the viewport meta tag is more widely supported (and used), it’s not currently part of any W3C spec, draft or otherwise. For more on @viewport, see the Opera developer blog. (Opera is currently the only other browser supporting @viewport.)
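To make the prefixed rule concrete, it can be paired with a media query so small-screen layouts still kick in inside snapped mode. A sketch along the lines of the pixel-based fix (the 320px width is an example value, not a required number):

```css
/* Inside snapped mode's narrow window, force the viewport to a
   small-screen width so responsive breakpoints trigger again. */
@media screen and (max-width: 400px) {
  @-ms-viewport {
    width: 320px;
  }
}
```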


An image displayed on a computer after it was successfully commandeered by Pinkie Pie during the first Pwnium competition in March.

Dan Goodin

A hacker who goes by "Pinkie Pie" has once again subverted the security of Google's Chrome browser, a feat that fetched him a $60,000 prize and resulted in a security update to fix underlying vulnerabilities.

Ars readers may recall Pinkie Pie from earlier this year, when he pierced Chrome's vaunted security defenses at the first installment of Pwnium, a Google-sponsored contest that offered $1 million in prizes to people who successfully hacked the browser. At the time a little-known, 19-year-old reverse engineer, Pinkie Pie stitched together at least six different bug exploits to bypass an elaborate defense perimeter designed by an army of some of the best software engineers in the world.

At the second installment of Pwnium, which wrapped up on Tuesday at the Hack in the Box 2012 security conference in Kuala Lumpur, Pinkie Pie did it again. This time, his attack exploited two vulnerabilities. The first, against Scalable Vector Graphics functions in Chrome's WebKit browser engine, allowed him to compromise the renderer process, according to a synopsis provided by Google software engineer Chris Evans.



Image: HolySkittles/Flickr.

The Electronic Frontier Foundation (EFF) has released version 3.0 of its HTTPS Everywhere browser plugin, which will automatically redirect you to secure, HTTPS connections. HTTPS Everywhere 3.0 adds support for 1,500 more websites, twice as many as previous releases.

Firefox users can install HTTPS Everywhere directly from the EFF site. There’s also an alpha release available for Google’s Chrome web browser. Unfortunately, limited add-on APIs mean that HTTPS Everywhere isn’t available for other web browsers.

Once it’s installed, the HTTPS Everywhere extension makes it easy to ensure you’re connecting to secure sites by rewriting all requests to an HTTPS URL whenever you visit one of the thousands of sites HTTPS Everywhere supports.
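Conceptually, the rewriting works like a lookup table of hosts known to support HTTPS. A minimal sketch of the idea (this is not the extension's actual ruleset format, and the host list here is hypothetical):

```javascript
// Hosts known (hypothetically) to serve the same content over HTTPS.
var secureHosts = ['example.com', 'www.example.com'];

// Rewrite an http:// URL to https:// when the host is on the list;
// otherwise return the URL unchanged.
function rewriteToHttps(url) {
  var match = /^http:\/\/([^\/]+)(\/.*)?$/.exec(url);
  if (!match) return url;                            // already https, or not http
  var host = match[1];
  if (secureHosts.indexOf(host) === -1) return url;  // no rule for this host
  return 'https://' + host + (match[2] || '');
}
```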

Why all the fuss about HTTPS? Well, every time you log in to a website through a plain HTTP connection, you expose your data to the world. It’s a bit like writing your username and password on a postcard and dropping it in the mailbox. Think of an HTTPS connection as an envelope to protect your postcard from prying eyes.

The problem gets a bit more complicated than just HTTPS though. Most sites already use HTTPS to handle your login info — that’s a good first step — but once you’re logged in, sites often revert to using an insecure HTTP connection.

So why doesn’t the entire web use HTTPS all the time? The answer is slightly complicated, but the primary reason is speed. HTTPS responses can’t be cached on CDN networks, which means pages may load slightly slower than they would over standard, insecure connections. For smaller sites, the cost of HTTPS certificates is an added expense. However, neither of those stumbling blocks has stopped Google, Facebook, Twitter, Wikipedia or the thousands of other sites large and small that now offer HTTPS connections.

The EFF is still a long way from its long term goal of encrypting the entire web, but with more sites supporting HTTPS connections every day the web is slowly but surely getting more secure.


The high-resolution retina display iPad has one downside — normal resolution images look worse than on lower resolution displays. On the web that means that text looks just fine, as does any CSS-based art, but photographs look worse, sometimes even when they’re actually high-resolution images.

Pro photographer Duncan Davidson was experimenting with serving high-resolution images to the iPad 3 when he ran up against what seemed to be a limit to the resolution of JPG images in WebKit. Serving small high-resolution images — in the sub-2000px range — works great, but replacing 1000px wide photographs with 2000px wide photos actually looks worse due to downsampling.

The solution, it turns out, is to go back to something you probably haven’t used in quite a while — progressive JPGs. It’s a clever solution to a little quirk in Mobile Safari’s resource limitations. Read Davidson’s follow-up post for more details, and be sure to look at the example image if you’ve got a new iPad, because more than just a clever solution, this is what the future of images on the web will look like.

As Davidson says:

For the first time, I’m looking at a photograph I’ve made on a screen that has the same sort of visceral appeal as a print. Or maybe a transparency laying on a lightbox. Ok, maybe not quite that good, but it’s pretty incredible. In fact, I really shouldn’t be comparing it to a print or a transparency at all. Really, it’s its own very unique experience.

To show off the sample on his site Davidson uses a bit of JavaScript to toggle the high- and low-res images, highlighting the difference.

But how could you go about serving the higher res image to just those screens with high enough resolution and fast enough connections to warrant it?

You can’t.

So what’s a web developer with high-res images to show off supposed to do? Well, right now you’re going to have to decide between all or nothing. Or you can use a hack like one of the less-than-ideal responsive image solutions we’ve covered before.

Right now visitors with the new iPad are probably a minority for most websites, so not that many people will be affected by low-res or poorly rendered high-res images. But Microsoft is already prepping Windows 8 for high-res retina-style screens and Apple is getting ready to bring the same concept to laptops.

The high-res future is coming fast and the web needs to evolve just as fast.

In the long run that means the web is going to need a real responsive image solution; something that’s part of HTML itself. A new HTML element like the proposed <picture> tag is one possible solution. The picture element would work much like the video tag, with code that looks something like this (illustrative markup in the proposed syntax, with placeholder file names):

    <picture alt="A sample photo">
      <source src="large.jpg" media="min-width: 800px">
      <source src="medium.jpg" media="min-width: 400px">
      <source src="small.jpg">
      <!-- fallback for browsers that don't support picture -->
      <img src="small.jpg" alt="A sample photo">
    </picture>

The browser uses this code to choose which image to load based on the current screen width.

The picture element would solve one part of the larger problem, namely serving the appropriate image to the appropriate screen resolution. But screen size isn’t the only consideration; we also need a way to measure the bandwidth available.

At home on my Wi-Fi connection I’d love to get Davidson’s high-res images on my iPad. When I’m out and about using a 3G connection it would be better to skip that extra overhead in favor of faster page load times.

Ideally browsers would send more information about the user’s environment along with each HTTP request. Think screen size, pixel density and network connection speed. Developers could then use that information to make a better-informed guess about which images to serve. Unfortunately, it seems unlikely we’ll get such tools standardized and widely supported before the high-res world overtakes the web. With any server-side solution to the bandwidth problem still far off on the horizon, client-side tools like navigator.connection will become even more valuable in the meantime.
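Until then, any connection-aware logic has to live on the client. A hedged sketch of how a page might combine pixel density with navigator.connection where the browser exposes it (the breakpoint, file names, and the 'wifi'/'ethernet' type strings are assumptions for illustration, not part of any spec):

```javascript
// Pick an image variant from pixel density plus connection type.
// Only serve the high-res file on a dense screen with a fast link.
function chooseImage(pixelRatio, connectionType) {
  var fastLink = connectionType === 'wifi' || connectionType === 'ethernet';
  return (pixelRatio >= 2 && fastLink) ? 'photo@2x.jpg' : 'photo.jpg';
}

// In a browser, gather the real values; fall back to safe defaults
// when the non-standard navigator.connection object is absent.
var conn = (typeof navigator !== 'undefined' && navigator.connection) || {};
var ratio = (typeof window !== 'undefined' && window.devicePixelRatio) || 1;
var src = chooseImage(ratio, conn.type || 'unknown');
```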

Further complicating the problem are two additional factors: data caps on mobile connections and technologies like Apple’s AirPlay. The former means that even if I have a fast LTE connection and a high-resolution screen, I still might not want to use my limited data allotment to download high-res images.

AirPlay means I can browse to a site with my phone — which would likely trigger smaller images and videos since it’s a smaller screen — but then project the result on a huge HD TV screen. This isn’t even a hypothetical problem; you can experience it today with PBS’s iPhone app and AirPlay.

Want to help figure out how the web needs to evolve and what new tools we’re going to need? Keep an eye on the W3C’s Responsive Images community group, join the mailing list and don’t be shy about contributing. Post your experiments on the web and document your findings like Davidson and countless others are already doing.

It’s not going to happen overnight, but eventually the standards bodies and the browser makers are going to start implementing solutions and the more test cases that are out there, the more experimenting web developers have done, the better those solutions will be. It’s your web after all, so make it better.

Photo: Ariel Zambelich/Wired


The new 3-D Inspector: Your pages, in three dimensions.

Mozilla has released Firefox 11, adding some new developer tools, support for the SPDY protocol and the ability to sync your add-ons between computers.

This release is not recommended for drummers, but everyone else can grab Firefox 11 from the official Firefox download page, or you can just wait for the automated update system to work its magic.

The big news in this release is the new add-on syncing tool. Firefox Sync has long handled syncing bookmarks, preferences, passwords, history and open tabs across computers, but until now syncing add-ons was an entirely manual process. Add-on syncing has been a feature request for Firefox Sync pretty much since syncing was announced in 2010, but until today it wasn’t available.

If you’d like to include add-ons in the list of items synced, just open up Firefox’s preference panel, head to the sync tab and check the new add-ons option.

Firefox 11 also has some new features for web developers, including the Tilt 3-D code inspector. Derived from the Tilt plug-in, the 3-D code inspector is a WebGL-based visualization of the page’s DOM and HTML structure. When you select “inspect element” Firefox will bring up a breadcrumb-style menu bar at the bottom of the page. In Firefox 11 you’ll find that a new button “3D” has joined the HTML and Style buttons in the page inspector menu bar.

This release adds a new Style Editor to Firefox’s developer toolkit. The Style Editor offers a two-pane view for browsing all of a webpage’s styles, both inline and external stylesheets. The right-hand pane displays the styles as plain text (with syntax highlighting), while the left pane shows the list of all your style sources. Make changes to the stylesheet and your changes are reflected on the webpage in real time. When you’ve got things looking the way you’d like you can then save the modified stylesheet.

If the new developer features convince you to switch back from Chrome, you’ll be glad to know that Firefox can now migrate your bookmarks, history, and cookies directly from Google Chrome.

Other new features in Firefox 11 include preliminary support for SPDY, Google’s alternative to the ubiquitous HTTP protocol. SPDY, pronounced “speedy,” isn’t quite ready for prime time yet in Firefox and is disabled by default. But if you’d like to test it out (Twitter is using SPDY where possible, as is Google) head to about:config and set network.http.spdy.enabled to true.
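If you'd rather not click through about:config, the same switch can be flipped with a user.js file in your Firefox profile directory (standard user.js prefs syntax; the pref name is the one given above):

```js
// user.js — read on each Firefox start; turns the SPDY support on.
user_pref("network.http.spdy.enabled", true);
```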

With Firefox 11 officially released, Firefox 12 moves to the beta channel and Firefox 13 to the Aurora channel. As of this writing, those channels don’t appear to have been updated just yet, but if you’re using either expect an update to arrive in the next day or two.




Less than 24 hours after a Russian hacker pocketed $60,000 by exploiting a previously unknown critical vulnerability in Google Chrome, company developers released an update removing the security threat.

The quick turnaround underscores one of the key advantages of Google's open-source browser: the speed with which highly complex bugs are fixed and updates are pushed out to users. By contrast, Microsoft, which must run updates through a battery of rigorous quality-assurance tests, often takes months to fix bugs of similar complexity.



Adobe Shadow makes it easy to test your site on multiple devices at the same time. Photo: Adobe

Adobe Labs has released Adobe Shadow, a new project that offers a simple way to test your websites on multiple devices at the same time.

To try out Adobe Shadow, head on over to Adobe Labs and grab the desktop app and Chrome browser plugin, along with the Android and iOS offerings.

If you’ve never tried testing your site simultaneously on multiple devices, the fact that Shadow consists of four separate apps should give you some idea of how difficult it generally is. Thankfully, once you have all the pieces installed, Shadow makes the rest of the testing process as simple as hitting refresh. In fact, much of the time you don’t even need to do that — Shadow will automatically mirror whatever you’re doing on the desktop to the rest of your connected devices.

Though it’s still a beta release, Shadow may well be the most useful thing Adobe has ever built for web developers, particularly those that have embraced responsive design. It’s no secret that, while responsive design allows developers to easily target a wide range of screen sizes, it adds a considerable amount of work to the development process. But with Shadow mirroring your website across dozens of devices at the same time, testing becomes simple and easy. It’s a bit like synchronized swimming for web browsers. You can even debug and make changes directly in Chrome and then see the results on each device. To get an idea of how Shadow works, check out this overview video from Adobe:

There are two small problems with Shadow. The primary problem is that Shadow will only test your site in WebKit mobile browsers. We’d hate to see Shadow become yet another reason for developers to ignore non-WebKit browsers. So, while Shadow is great, it won’t give you the whole picture right now.

The good news is that Shadow is a beta release and a work in progress. I spoke with Bruce Bowman, Senior Product Manager of Shadow and, while he stopped short of committing to anything, Bowman made it clear that Adobe plans to keep expanding Shadow’s capabilities as the project progresses.

The other problem isn’t with Shadow directly: its usefulness is tied to the number of iOS and Android devices you have on hand. Obviously those who will benefit most from Shadow are large web development shops with the budget to invest in dozens of mobile devices. Shadow is no less handy for individual developers with only one or two devices, though the results are of course limited.

Should Shadow prove popular, perhaps it will help spur the sort of device-swap gatherings we’ve heard mobile expert Peter Paul Koch suggest — a group of web developers pools their resources, brings together a wide range of mobile devices and takes turns testing websites. Shadow could make that process considerably easier and faster thanks to its live editing capabilities.


Google has released an experimental version of the Chromium web browser with support for the company’s new Dart programming language. Dart, which is Google’s attempt to improve on JavaScript, has thus far not enjoyed much support outside of Google, but the company continues to push forward with its own efforts.

The new development preview version of the Chromium browser, the open source version of Google’s Chrome browser, contains the Dart Virtual Machine. This release, which Google is calling “Dartium,” can be downloaded from the Dart language website. At the moment it’s available only for Mac OS X and Linux. Google says a Windows version is “coming soon.” Keep in mind that this is a preview release and intended for developer testing, not everyday use.

Google originally created Dart to address the shortcomings of JavaScript and ostensibly speed up the development of complex, large-scale web applications.

While there is much programmers might like about Dart, it is, like Microsoft’s VBScript before it, a nonstandard language from a single vendor created without any regard for the existing web standards process. The new Dartium release is the first browser to include a Dart Virtual Machine and, based on the response from other browser makers to the initial release of Dart, likely the only browser that will ever ship with a Dart VM. For its part Google says it plans to incorporate the experimental Dart VM into Chrome proper in the future.

The company also has a plan for all those browsers that aren’t jumping on the Dart bandwagon — a compiler that translates Dart to good old JavaScript. In this scenario Dart ends up somewhat like CoffeeScript, a JavaScript abstraction that makes more sense to some programmers.

For more details on the new Dartium browser and the latest improvements to the Dart VM, be sure to check out the Google Code Blog announcement.


Firefox 6 is now available. This update to the popular open source web browser comes just eight weeks after Firefox 5 was unveiled. The quick turnaround time and increasing version numbers are part of Mozilla’s new rapid release cycle.

You can grab the latest version of Firefox from the Mozilla downloads site or head to the About Firefox menu and apply the update.

Firefox 6 doesn’t bring any huge changes to the table, despite what the version number bump might imply (and in fact Mozilla is planning to hide the version number so future releases will just be "Firefox"), but it is up to 20 percent faster than Firefox 5.

The most noticeable change to the look of Firefox 6 is the move to a Chrome-style URL bar where the domain name is now darker than the rest of the URL. Firefox 6 doesn’t dispense with the HTTP prefix the way Chrome does, but Firefox 7, which will soon move from the Aurora to the Beta channel, will hide the http:// portion of the URL.

Firefox 6 does include some nice new tools for web developers. Scratchpad is a new JavaScript editor that’s well worth checking out, and the Web Console panel has also been improved.

For a complete list of everything that’s new in Firefox 6, check out the extensive release notes.

