
Formal methods


theodp writes "Microsoft's promotion of Julie Larson-Green to lead all Windows software and hardware engineering in the wake of Steven Sinofsky's resignation is reopening the question of the difference between Computer Science and Software Engineering. According to their bios on Microsoft's website, Sinofsky has a master's degree in computer science from the University of Massachusetts Amherst and an undergraduate degree with honors from Cornell University, while Larson-Green has a master's degree in software engineering from Seattle University and a bachelor's degree in business administration from Western Washington University. A comparison of the curricula at Sinofsky's and Larson-Green's alma maters shows there's a huge difference between UMass's MSCS program and Seattle U's MSE program. So, is one program inherently more compatible with Microsoft's new teamwork mantra?"




I am a recent high school graduate who has been programming in C++ since I was 11. I know x86 assembly, C/C++ and Java. I won't be learning anything new in CS for the first 2-3 semesters in college. The introduction class and the algorithms and data structures classes are prerequisites for the rest of the CS curriculum. I already took Data Structures and Algorithms this past year, which took the practical, implementation-focused approach. I am currently watching the algorithms class on MIT OCW to get a grasp of the more theoretical aspects, and I can say that I understand it; it's just a bit dry. This summer, I will be watching the multivariable calculus, intro to algorithms and Physics III lectures through MIT OCW. I have had experience in robotics through FIRST; I personally wrote and implemented my interpretations of the PID controller and Kalman filter, but that's about it. I followed the first week or two of the Udacity Robot Car class, but never really followed through. What fun thing can I do during the summer? Project Euler is on the list, and writing an Android app is on the list. Perhaps I can really dive into parallel processing through the use of my PS3. But those are all "practical" things.

What can I be doing to learn the more theoretical aspects of CS? To be honest, I really don't know much of what happens on the theoretical side of CS.

submitted by davidthefat


The original question was: I’ve heard somewhere that they’re also trying to build computers using molecules, like DNA. In general, would it work to try and simulate a factoring algorithm using real-world things, and then let the physics of the interactions stand in for the computer calculation? Since the real world does all kinds of crazy calculations in no time.

Physicist: The amount of time that goes into, say, calculating how two electrons bounce off of each other is very humbling.  It’s terribly frustrating that the universe has no hang ups about doing it so fast.

In some sense, basic physical laws are the basis of how all calculations are done.  It’s just that modern computers are “Turing machines”, a very small subset of all the possible kinds of computational devices.  Basically, if your calculating machine consists of the manipulation of symbols (which in turn can always be reduced to the manipulation of 1s and 0s), then you’re talking about Turing machines.  In the earlier epoch of computer science there was a strong case for analog computers over digital computers.  Analog computers use the properties of circuit elements like capacitors, inductors, or even just the layout of wiring, to do mathematical operations.  In their heyday they were faster than digital computers.  However, they’re difficult to design, not nearly as versatile, and they’re no longer faster.

Nordsieck's Differential Analyzer was an analog computer used for solving differential equations.
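To make the analyzer’s trick concrete: it solves an equation like dy/dt = -y by feeding a signal through physical integrators, continuously and in real time. Here is a minimal Python sketch of the same computation done digitally, where a fixed-step Euler loop stands in for the machine’s integrators (the step size is an assumption of the sketch, not a property of the machine):

    # A digital mimic of what a differential analyzer does continuously:
    # solve dy/dt = -y by repeated integration.
    import math

    def euler_solve(y0, t_end, dt):
        """Integrate dy/dt = -y with fixed-step Euler integration."""
        y, t = y0, 0.0
        while t < t_end:
            y += -y * dt   # the integrator: accumulate dy = (-y) dt
            t += dt
        return y

    approx = euler_solve(1.0, 2.0, 1e-4)
    exact = math.exp(-2.0)      # analytic solution y(t) = e^-t
    print(approx, exact)        # agree to roughly 4 decimal places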

Any physical phenomenon that represents information in definite, discrete states is doing the same thing a digital computer does; it’s just a question of speed.  To see other kinds of computation it’s necessary to move into non-digital kinds of information.  One beautiful example is the gravity-powered square root finder.

Newtonian physics used to find the square root of numbers. Put a marble next to a number N (white dots) on the slope, and the marble will land on the ground at a distance proportional to √N.

When you put a marble on a ramp, the horizontal distance it will travel before hitting the ground is proportional to the square root of how far up the ramp it started.  Another mechanical calculator, the planimeter, can find the area of any shape just by tracing along the edge.  Admittedly, a computer could do both calculations a heck of a lot faster, but they’re still decent enough examples.
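For the curious, the ramp’s arithmetic works out as follows (an idealized sketch that ignores rolling friction and the marble’s rotation): a marble released a height h up the ramp leaves the table edge at speed v = √(2gh), falls for time t = √(2H/g) from table height H, and lands at d = vt = 2√(hH), proportional to √h, with g cancelling out entirely. In Python:

    # Marble-on-a-ramp square root finder, idealized: no rolling friction,
    # no rotational energy. Landing distance comes out as 2*sqrt(h*H).
    import math

    def landing_distance(h, table_height=1.0, g=9.81):
        """Horizontal distance for a marble released at ramp height h."""
        v = math.sqrt(2 * g * h)              # speed leaving the table edge
        t = math.sqrt(2 * table_height / g)   # free-fall time to the floor
        return v * t                          # equals 2*sqrt(h*table_height)

    for n in [1, 4, 9, 16]:
        print(n, round(landing_distance(n), 3))  # scales as 1 : 2 : 3 : 4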

Despite the power of digital computers, it doesn’t take much looking around to find problems that can’t be efficiently done on them, but that can be done using more “natural” devices.  For example, solutions to “harmonic functions with Dirichlet boundary conditions” (soap films) can be fiendishly difficult to calculate in general.  The huge range of possible shapes that the solutions can take means that often even the most reasonable computer program (capable of running in any reasonable time) will fail to find all the solutions.

Part of Richard Courant's face demonstrating a fancy math calculation using soapy water and wires.

So, rather than burning through miles of chalkboards and swimming pools of coffee, you can bend wires to fit the boundary conditions, dip them in soapy water, and see what you get.  One of the advantages, not generally mentioned in the literature, is that playing with bubbles is fun.
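What the film computes can be mimicked numerically. A sketch under stated assumptions (small deflections, a square region, made-up boundary heights standing in for the bent wire): Jacobi relaxation repeatedly replaces each interior grid point with the average of its neighbors, which converges to a harmonic function, the same equilibrium the soap finds on its own, only much more slowly.

    # Jacobi relaxation for Laplace's equation with Dirichlet boundaries.
    # The fixed edge values play the role of the bent wire.
    import numpy as np

    n = 50
    u = np.zeros((n, n))
    u[0, :] = 1.0                         # wire height along the top edge
    u[-1, :] = 0.0                        # wire height along the bottom edge
    u[:, 0] = np.linspace(1.0, 0.0, n)    # wire slopes down the left side
    u[:, -1] = np.linspace(1.0, 0.0, n)   # and down the right side

    for _ in range(5000):  # relax interior points toward neighbor averages
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])

    print(u[n // 2, n // 2])  # the film's height at the center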

Today we’re seeing the advent of a new type of computer, the quantum computer, which is kinda-digital/kinda-analog.  Using quantum mechanical properties like superposition and entanglement, quantum computers can (or would, if we could get them off the ground) solve problems that would take even very powerful normal computers a tremendously long time to solve, like integer factorization.  “Long time” here means that the heat death of the universe becomes a concern.  Long time.
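To put a number on “long time”: the simplest classical attack on factoring, trial division, does work that grows exponentially with the bit-length of the number, and that is the gap Shor’s quantum algorithm closes. A rough sketch (the timing estimate in the comment is back-of-envelope, not a benchmark):

    def trial_division(n):
        """Return the smallest nontrivial factor of n, scanning up to sqrt(n)."""
        f = 2
        while f * f <= n:
            if n % f == 0:
                return f
            f += 1
        return n  # n is prime

    # For a b-bit semiprime with balanced factors the loop runs ~2**(b/2)
    # times; at a billion trials per second, a 256-bit number already
    # needs on the order of 10**22 years.
    print(trial_division(2**32 + 1))  # 641, Euler's famous Fermat-number factor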

Aside from actual computers, you can think of the universe itself, in a… sideways, philosophical sense, as doing simulations of itself that we can use to understand it.  For example, one of the more common questions we get is along the lines of “how do scientists calculate the probability/energy of such-and-such chemical/nuclear reaction?”.  There are certainly methods to do the calculations (have Schrödinger equation, will travel), but really, if you want to get it right (and often save time), the best way to do the calculation is to let nature do it.  That is, the best way to calculate atomic spectra, or how hot fire is, or how stable an isotope is, or whatever, is to go to the lab and just measure it.

Even cooler, a lot of optimization problems can be solved by looking at the biological world.  Evolution is, ideally (though not always), a process of optimization.  During the early development of sonar and radar there were (and still are) a number of questions about what kind of “ping” would return the greatest amount of information about the target.  After a hell of a lot of effort it was found that the researchers had managed to re-create the sonar pings of several bat species.  Bats are still studied for what the universe has already “calculated” about optimal sonar techniques.
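Evolution-as-optimizer is easy to caricature in code. A minimal (1+1) evolution strategy, with a made-up one-dimensional “fitness” standing in for something like ping quality: mutate the current design, and keep the mutant only when it scores better.

    # (1+1) evolution strategy: one parent, one mutant per generation.
    import random

    def fitness(x):
        return -(x - 3.7) ** 2           # toy objective, peak at x = 3.7

    x = 0.0                              # ancestral design
    for _ in range(10_000):              # generations
        mutant = x + random.gauss(0, 0.1)
        if fitness(mutant) > fitness(x):
            x = mutant                   # selection keeps the improvement
    print(x)                             # converges near 3.7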

You can usually find a solution through direct computation, but sometimes looking around works just as well.


alphadogg writes "Judea Pearl, a longtime UCLA professor whose work on artificial intelligence laid the foundation for such inventions as the iPhone's Siri speech recognition technology and Google's driverless cars, has been named the 2011 ACM Turing Award winner. The annual Association for Computing Machinery A.M. Turing Award, sometimes called the 'Nobel Prize in Computing,' recognizes Pearl for his advances in probabilistic and causal reasoning. His work has enabled creation of thinking machines that can cope with uncertainty, making decisions even when answers aren't black or white."




Retread of post originally made on 18 Mar 2004

I was attending a workshop at XP/Agile Universe in 2002 when the phrase 'Specification By Example' struck me as a way to describe one of the roles of testing in XP.

(These days it's terribly unfashionable to talk about Test Driven
Development (TDD) having anything to do with testing, but like Jon I do think that having a comprehensive automated test suite
is more valuable than the term 'side-effect' implies. Rather like if
someone offered me a million dollars to hike up a mountain. I may say
that the main purpose of the hike is the enjoyment of nature, but the
side-effect to my wallet is hardly trivial....)

Specification By Example isn't the way most of us have been
brought up to think of specifications. Specifications are supposed to
be general, to cover all cases. Examples only highlight a few points; you have to infer the generalizations yourself. This does mean that
Specification By Example can't be the only requirements technique you
use, but it doesn't mean that it can't take a leading role.

So far the dominant idea with rigorous specifications, that is those that can be clearly judged to be passed or failed, is to use pre and post conditions. These techniques dominate in formal methods, and also underpin Design by Contract. These techniques have their place, but they aren't ideal. The pre and post conditions can be very easy to write in some situations, but others can be very tricky. I've tried to use them in a number of enterprise application settings, and I've found that in many situations it's as hard to write the pre and post conditions as it is to write the solution. One of the great things about specification by example is that examples are usually much easier to come up with, particularly for the non-nerds who we write the software for. (Timothy Budd pointed out that while you can show a lot of stack behavior with pre and post conditions, it's very tricky to write pre and post conditions that show the LIFO property.)
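Budd's stack point is easy to see in executable form. A minimal Python sketch (the Stack class is hypothetical, just enough to run the test): the pop precondition reads naturally as a contract clause, while the LIFO property itself is most directly pinned down by a concrete example.

    # A bare-bones stack with one contract-style precondition, plus an
    # example-based test that states the LIFO property directly.
    class Stack:
        def __init__(self):
            self._items = []

        def push(self, x):
            self._items.append(x)

        def pop(self):
            # contract-style precondition: pop requires a non-empty stack
            assert self._items, "precondition violated: stack is empty"
            return self._items.pop()

    def test_lifo_order():
        # specification by example: this scenario *is* the LIFO property
        s = Stack()
        s.push(1); s.push(2); s.push(3)
        assert s.pop() == 3   # last in, first out
        assert s.pop() == 2
        assert s.pop() == 1

    test_lifo_order()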

An important property of TDD tests is that they involve a
double-check. In fact this highlights an amusing little lie of the XP
community. They make a very strong point of saying things Once and
Only Once, but in fact they always say things twice: once in the code
and once in the tests. Kent has pointed out that this double-check
principle is a vital principle, and it's certainly one that humans use
all the time. The value of the double check is very much tied into
using different methods for each side of the double check. Combining
Specification By Example with production code means that you have both
things said quite differently, which increases their ability to find
errors.

The formal specification community has constantly had trouble verifying that a design satisfies a specification, particularly in doing this without error-prone humans. For Specification By Example,
this is easy. Another case of Specification By Example being less
valuable in theory but more valuable in practice.

One Design by Contract fan pointed out that if you write a
specification in terms of tests, then the supplier could satisfy the
specification by just returning hard-coded responses to the specific
test inputs. My flippant answer to this is that if you are concerned
about this then the issue of tests versus Design by Contract is the
least of your worries. But there is a serious point here. Tests are
always going to be incomplete, so they always have to be backed up
with other mechanisms. Being the twisted mind that I am, I actually
see this as a plus. Since it's clear that Specification By Example
isn't enough, it's clear that you need to do more to ensure that
everything is properly communicated. One of the most dangerous things
about a traditional requirements specification is when people think
that once they've written it they are done communicating.
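The objection is easy to make concrete against the stack sketch above: a degenerate supplier that memorizes the example's expected answers passes test_lifo_order without being a stack at all, which is exactly why examples need backing from other mechanisms.

    # A "supplier" that hard-codes responses to the specific test inputs.
    # Substituted for Stack, it passes test_lifo_order while implementing
    # nothing of the general behavior.
    class HardCodedStack:
        def __init__(self):
            self._canned = [3, 2, 1]    # answers memorized from the test script

        def push(self, x):
            pass                        # ignores its input entirely

        def pop(self):
            return self._canned.pop(0)  # replays the expected answers in order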

Specification By Example only works in the context of a working
relationship where both sides are collaborating and not fighting. The
examples trigger abstractions in the design team while keeping the
abstractions grounded. You do need more - things like regular conversation, techniques like Domain Driven Design, indeed even doses of Design by Contract. While I've never had the opportunity to use Design by Contract fully (i.e. with Eiffel) I regularly think about interfaces in contractual terms.
Specification By Example is a powerful tool, perhaps my most used
tool, but never my only tool.

(For more thinking on Specification By Example, if not by that
name, take a look at Brian Marick's
writings. Somewhere on his site there probably is one super page that
sums up his view on it. Of course finding it is less valuable than reading all the stuff there while you're trying.)

Some Later Comments

A couple of years after I first wrote this, there was a post by Cedric Beust that was critical of agile methods and caused a minor blog spat. There were rebuttals by Jeff Langr and Bob Martin, and I sent this post through the feed again. Jeff Langr later added a nice example using tests as specification by example for Java's Math.pow function.
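In that spirit, here is what tests-as-specification for a pow function can look like, written against Python's math.pow rather than Java's (the particular cases are my own illustration, not Langr's): each example specifies one concrete behavior, and together they sketch the function's contract.

    # Example-based specification of math.pow: each assertion is one
    # concrete, checkable statement about the function's behavior.
    import math

    def test_pow_examples():
        assert math.pow(2, 10) == 1024.0   # integer exponent
        assert math.pow(9, 0.5) == 3.0     # fractional exponent is a root
        assert math.pow(5, 0) == 1.0       # anything to the zero is one
        assert math.pow(2, -1) == 0.5      # negative exponent inverts
        assert math.isclose(math.pow(10, 0.3010299957),
                            2.0, rel_tol=1e-9)  # 10**log10(2) round-trips

    test_pow_examples()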

reposted on 17 Nov 2011


Iterative design is one of the most powerful development tools available to professionals. However, if you haven't used it before or aren't that familiar with it, huge problems can arise that limit its effectiveness.



LLBMC: The Low-Level Bounded Model Checker

Google Tech Talk, February 22, 2011. Presented by Carsten Sinz, Stephan Falke, & Florian Merz, Karlsruhe Institute of Technology, Germany.

ABSTRACT: Producing reliable and secure software is time consuming and cost intensive, and still many software systems suffer from security vulnerabilities or show unintended, faulty behavior. Testing is currently the predominant method to ascertain software quality, but over the last years formal methods like abstract interpretation and model checking have made huge progress and become applicable to real-world software systems. Their promise is to reach a much higher level of software quality with less effort. In this talk we present a recent method for systematic bug finding in C programs, called Bounded Model Checking (BMC), that works fully automatically and achieves a high level of precision. We present our implementation, LLBMC, which, in contrast to other tools, doesn't work on the source code level but employs a compiler intermediate representation (LLVM IR) as a starting point. It is thus easily adaptable to support verification of other programming languages such as C++ or ObjC. LLBMC also uses a highly precise (untyped) memory model, which allows it to reason about heap and stack data accesses. Moreover, we show how LLBMC can be integrated into the software development process using a technique called Abstract Testing.
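LLBMC itself consumes LLVM IR, but the core BMC move, unroll loops to a bound, encode the program as constraints, and ask a solver for an input that breaks an assertion, fits in a few lines. A toy Python sketch using the z3-solver package (the program under test and the bound are invented for illustration, not taken from the talk):

    # Toy bounded model checking with z3 (pip install z3-solver).
    # Program under test, in pseudocode:
    #     x = input(); repeat 3 times: x = 2*x + 1; assert x != 23
    import z3

    x0 = z3.BitVec("x0", 32)     # symbolic 32-bit program input
    x = x0
    for _ in range(3):           # unroll the loop to the bound k = 3
        x = 2 * x + 1            # one iteration's transition relation

    s = z3.Solver()
    s.add(x == 23)               # negation of the assertion "x != 23"
    if s.check() == z3.sat:
        print("bug found: input", s.model()[x0], "violates the assertion")
    else:
        print("assertion holds for every input, up to bound 3")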
From: GoogleTechTalks | Views: 1009 | Ratings: 5 | Time: 01:10:30 | More in: Science & Technology
