
An anonymous reader wrote in with a story on OS News about the latest release of the Genode Microkernel OS Framework. Brought to you by the research labs at TU Dresden, Genode is based on the L4 microkernel and aims to provide a framework for writing multi-server operating systems (think the Hurd, but with even device drivers as userspace tasks). Until recently, the primary use of L4 seems to have been as a glorified hypervisor for Linux, but now that's changing: the Genode example OS can build itself on itself: "Even though there is a large track record of individual programs and libraries ported to the environment, those programs used to be self-sustaining applications that require only little interaction with other programs. In contrast, the build system relies on many utilities working together using mechanisms such as files, pipes, output redirection, and execve. The Genode base system does not come with any of those mechanisms, let alone the subtle semantics of the POSIX interface as expected by those utilities. Being true to microkernel principles, Genode's API has a far lower abstraction level and is much more rigid in scope." The detailed changelog has information on the huge architectural overhaul of this release. One thing this release features that Hurd still doesn't have: working sound support. For those unfamiliar with multi-server systems, the project has a brief conceptual overview document.



I just started reading a book on parallel programming, and I immediately began to wonder how it affects time complexity.

I understand that theory and programming are not the same, but time complexity seems to rely on the notion that the algorithm is executed serially. Will the mathematics used to describe algorithms need to change to accommodate this?

For example, a binary search algorithm could be written such that each element in the set is assigned to a CPU core (or something similar). Then, wouldn't the algorithm essentially be constant time, O(1), instead of O(log n)?

Thanks for any insightful response.

submitted by rodneyfool



Explore the key eXtremeDB features that enable developers to create the most advanced software applications using McObject's real-time database technology.

The eXtremeDB™ In-Memory Database System embedded database is McObject's core product: an exceptionally fast database management system, designed for performance, with a strict memory-based architecture and direct data manipulation. (For a database that incorporates on-disk persistence, see McObject's eXtremeDB Fusion product.) Storing and manipulating data in exactly the form used by the application removes overheads associated with caching and translation. Typical read and write accesses are at the level of a few microseconds, or less. The engine is reentrant, allowing for multiple execution threads, with transactions supporting the ACID properties, assuring transaction and data integrity.

The Runtime Environment

The eXtremeDB Runtime Environment provides:

  • Accelerated transactions. The eXtremeDB in-memory database stores data entirely in main memory, eliminating the need for disk access, caching and other processes that add overhead to disk-based databases. The eXtremeDB transaction manager is optimized for high transaction rates.
  • Ultra-small footprint. By intelligently redesigning and streamlining core database functions, McObject offers an in-memory database system (IMDS) with an unbelievably small RAM footprint of approximately 100K! This makes eXtremeDB a powerful enhancement to many intelligent devices with resource limits that, until now, ruled out the use of an embedded database system.
  • Direct data access. Earlier data management technology required copying records from database storage to cache, and then to a new location for manipulation by the application. By operating as a RAM database that works with data directly in main memory, eXtremeDB eliminates the overhead of duplicate data sets and of copying data between locations.
  • No translation. eXtremeDB stores data in the exact form in which it is used by the application: no mapping of a C data element to a relational representation, for example, and no additional code to pick fields from tables and copy them to C structures. By eliminating this overhead, eXtremeDB reduces memory and CPU demands.
  • High reliability. For data integrity, eXtremeDB transactions support the ACID properties, ensuring that operations grouped into transactions will complete together or the database will be rolled back to a pre-transaction state.
  • Two APIs. eXtremeDB provides two APIs. The first is a standard function library for basic operations such as cursor movement and opening and closing the database. The second API, for manipulating data, is derived from the given application’s data model and thus reflects the purpose and schema for which it is being used. For the runtime environment, this means more reliable code: the C/C++ compiler catches data typing and assignment errors when the application is built, so it is much harder for coding errors to make it into the final build.

eXtremeDB is designed for high performance: optimized memory managers, hash- and tree-based indexes, multiple data layouts, transactions with priority scheduling, and an application-specific API.


Shared Memory Databases

In addition to the eXtremeDB embedded database that operates in conventional memory, a shared memory version is available for multi-processing environments such as Solaris, QNX or Linux. With this version, an eXtremeDB database is created in shared memory and mapped to the local address space of each process, thereby allowing multiple processes and multiple threads within each process to share eXtremeDB in-memory databases.

The shared memory eXtremeDB runtime is built as a different binary (library or archive) than the conventional memory version. A single process can still create and connect to the database from multiple threads using the shared memory runtime, however, the database(s) will be placed into shared memory instead of the process’ memory. Depending on the target platform, eXtremeDB supports one of the following three synchronization methods when managing shared memory databases:

  • A System V semaphore mechanism is used on operating environments such as Sun Solaris and Linux (System V semaphores are identified by system-wide integer keys, which are typically derived from files).

  • A POSIX shared memory implementation is used on QNX 4.x and QNX 6.x platforms.

  • A Win32 shared memory implementation is used on the Microsoft Windows Embedded and classic Windows platforms.

eXtremeDB XML Extensions

McObject developed the eXtremeDB XML Extensions to facilitate simple schema evolution and the exchange of data between the eXtremeDB embedded database and external systems. With the XML-enabled version, the eXtremeDB schema compiler generates new interface functions for each object that provide the means to

  • retrieve an object encoded as XML;
  • create an object in the database from an XML document;
  • replace (update) the contents of an object already in the database with the content of an XML document;
  • generate the XML schema for each class in the database.

The XML interface functions could be used, for instance, in concert with the eXtremeDB event notifications to cause data to be shared between an eXtremeDB embedded database and other XML-enabled systems when something of interest changes in the database. The XML interfaces can also be used to facilitate simple schema evolution by exporting the database to XML documents, adding/dropping fields, indexes, and classes, and importing the saved XML documents into the new database.

eXtremeDB XML and eXtremeDB XML schema encoding were developed in accordance with the W3C SOAP encoding recommendations. Thus, the XML interface functions could also be used in conjunction with an embedded Web server to deliver embedded database content (from an application running within a consumer electronics device, for example) to a Web browser or any other SOAP client.

SOAP Standards

The W3C SOAP encoding and XML schema encoding recommendations are defined in the corresponding W3C documents.

The Development Environment

Developers strive to produce readable, maintainable, efficient code in the shortest possible time. The eXtremeDB in-memory database system (IMDS) includes several features that boost the developer’s capabilities when integrating the database in demanding real-time applications. Incorporating third party software often means learning and adopting an API that does not completely fit an application. eXtremeDB’s project-specific API ensures that each database operation in the API reflects the type of the data being manipulated.

To help in application debugging, McObject’s embedded database includes numerous traps in the runtime code; these can be selectively disabled as development and testing progresses, to optimize the application for speed.

McObject offers full source code, to give an in-depth understanding of eXtremeDB within an application. In addition, eXtremeDB supports virtually all data types as well as extremely efficient indexing for queries. For querying, McObject provides hash indexes for exact match searches; tree indexes for pattern match, range retrieval and sorting; and object-identifier references, for direct access. Rather than storing duplicate data, indexes contain only a reference to data, keeping memory requirements for the RAM database to an absolute minimum.

With eXtremeDB embedded database, the developer focuses on the data definition first, then eXtremeDB generates the API from this definition via the schema compiler.

The result is:

  • An easy-to-learn API that is optimized for the application.

  • Code that is more legible as well as easier to write and maintain.

  • Compile-time type checking that helps eliminate coding errors.

Example: The following is a (simple) class and an example of the API to put a new value into a record in the database:

class Measurement {
  string measure;
  time timestamp;

  unique tree <measure, timestamp> trend;
};

Measurement_measure_put(&m, meas);
Measurement_timestamp_put(&m, value);

Progressive error detection and consistency checking features

If an application mistakenly passes a corrupted transaction or object handle into a runtime method, eXtremeDB (by default) raises a fatal exception and stops the execution of the program. In most cases, the application developer can then examine the call stack to find the source of the corruption. The eXtremeDB runtime implements many such verification traps and consistency checks. This protection does not come free: the checks cost extra CPU cycles and space. However, once the application is debugged and consistently passes verification tests, developers can build the optimized version of the eXtremeDB runtime, which removes the traps and internal checks to reclaim those clock cycles.

Complex data types and efficient queries

  • Supports virtually all data types, including structures, arrays, vectors and BLOBs

  • Querying methods include hash indexes for exact match searches

  • Tree indexes support queries for pattern match, range retrieval and sorting

  • “Voluntary” indexes for program control over index population

  • Object-identifier references provide direct data access

  • Autoid for system-defined object identifiers

  • Rather than storing duplicate data, indexes contain only a reference to data, minimizing memory requirements

  • Synchronous/asynchronous event notifications

  • Optional object history

Supported Platforms

Embedded Platforms:

  • VxWorks
  • QNX Neutrino
  • Various Real-Time Linux distributions
  • Lynx OS
  • RTXC Quadros, RTXC
  • Microsoft Windows Embedded Platforms
  • Windows Real-Time Extensions
  • Bare bones boards (no operating system required)

Server and Desktop Platforms:

  • Sun Solaris
  • HP-UX
  • Linux distributions
  • Classic Windows platforms (98/NT/2000/XP/Vista)

Development Environments:

  • GNU toolchain (gcc 2.95 and higher)
  • Tornado (GNU and Diab compilers)
  • QNX Momentics IDE (C, C++, Embedded C++)
  • Metrowerks CodeWarrior IDE (various platforms)
  • GreenHills Multi
  • Microsoft Visual Studio (C/C++)

One of the things I like most about Google's Chrome web browser is how often it is updated. But now that Chrome has rocketed through eleven versions in two and a half years, the thrill of seeing that version number increment has largely worn off. It seems they've picked off all the low-hanging fruit at this point and are mostly polishing. The highlights from Version 11, the current release of Chrome?

HTML5 Speech Input API. Updated icon.

Exciting, eh? Though there was no shortage of hand-wringing over the new icon, of course.

Chrome's version number has been changing so rapidly lately that every time someone opens a Chrome bug on a Stack Exchange site, I have to check my version against theirs just to make sure we're still talking about the same software. And once -- I swear I am not making this up -- the version incremented while I was checking the version.

another nanosecond, another Chrome version.

That was the day I officially stopped caring what version Chrome is. I mean, I care in the sense that sometimes I need to check its dogtags in battle, but as a regular user of Chrome, I no longer think of myself as using a specific version of Chrome, I just … use Chrome. Whatever the latest version is, I have it automagically.

For the longest time, web browsers have been strongly associated with specific versions. The very mention of Internet Explorer 6 or Netscape 4.77 should send a shiver down the spine of any self-respecting geek. And for good reason! Who can forget what a breakout hit Firefox 3 was, or the epochs that Internet Explorer 7, 8 and 9 represent in Microsoft history. But Chrome? Chrome is so fluid that it has transcended software versioning altogether.


This fluidity is difficult to achieve for client software that runs on millions of PCs, Macs, and other devices. Google put an extreme amount of engineering effort into making the Chrome auto-update process "just work". They've optimized the heck out of the update process.

Rather than push out a whole new 10MB update [for each version], we send out a diff that takes the previous version of Google Chrome and generates the new version. We tried several binary diff algorithms and have been using bsdiff up until now. We are big fans of bsdiff - it is small and worked better than anything else we tried.

But bsdiff was still producing diffs that were bigger than we felt were necessary. So we wrote a new diff algorithm that knows more about the kind of data we are pushing - large files containing compiled executables. Here are the sizes for the recent 190.1 -> 190.4 update on the developer channel:

  • Full update: 10 megabytes
  • bsdiff update: 704 kilobytes
  • Courgette update: 78 kilobytes

The small size in combination with Google Chrome's silent update means we can update as often as necessary to keep users safe.

Google's Courgette -- the French word for Zucchini, oddly enough -- is an amazing bit of software optimization, capable of producing uncannily small diffs of binary executables. To achieve this, it has to know intimate details about the source code:

The problem with compiled applications is that even a small source code change causes a disproportional number of byte level changes. When you add a few lines of code, for example, a range check to prevent a buffer overrun, all the subsequent code gets moved to make room for the new instructions. The compiled code is full of internal references where some instruction or datum contains the address (or offset) of another instruction or datum. It only takes a few source changes before almost all of these internal pointers have a different value, and there are a lot of them - roughly half a million in a program the size of chrome.dll.

The source code does not have this problem because all the entities in the source are symbolic. Functions don't get committed to a specific address until very late in the compilation process, during assembly or linking. If we could step backwards a little and make the internal pointers symbolic again, could we get smaller updates?

Since the version updates are relatively small, they can be downloaded in the background. But even Google hasn't figured out how to install an update while the browser is running. Yes, there are little alert icons to let you know your browser is out of date, and you eventually do get nagged if you are woefully behind, but updating always requires the browser to restart.


Web applications have it far easier, but they have version delivery problems, too. Consider WordPress, one of the largest and most popular webapps on the planet. We run WordPress on multiple blogs and even have our own WordPress community. WordPress doesn't auto-update to each new version, but it makes it as painless as I've seen for a webapp. Click the update link on the dashboard and WordPress (and its add-ons) update to the latest version all by themselves. There might be the briefest of interruptions in service for visitors to your WordPress site, but then you're back in business with the latest update.


WordPress needs everyone to update to the latest versions regularly for the same reasons Google Chrome does -- security, performance, and stability. An internet full of old, unpatched WordPress or Chrome installations is no less dangerous than an internet full of old, unpatched Windows XP machines.

These are both relatively seamless update processes. But they're nowhere near as seamless as they should be. One-click updates that require notification and a restart aren't good enough. To achieve the infinite version, we software engineers have to go a lot deeper.


Somehow, we have to be able to automatically update software while it is running without interrupting the user at all. Not if -- but when -- the infinite version arrives, our users probably won't even know. Or care. And that's how we'll know we've achieved our goal.
