1/6/13

Java EE 7 Moves Forward

Oracle says next version of enterprise Java received Java Community Process executive approval; reference implementation due soon


Java EE 7 Platform Completes the JCP Final Approval Ballot

I'm happy to announce that the Java EE 7 Platform and Web Profile JSR has just passed the JCP Executive Committee Final Approval Ballot, with the support of an overwhelming majority of the committee members. This completes the JCP approval process for all of the JSRs under the Java EE 7 umbrella. This Java EE 7 Platform release — the first under Oracle's stewardship — comprises the following 14 JSRs and 9 Maintenance Releases (MRs), including the JSRs led by our partners in this effort, Red Hat (CDI and Bean Validation) and IBM (Batch). Of these JSRs, the WebSocket, JSON, Concurrency, and Batch JSRs are new to the Java EE Platform with this release.

JSRs:
  • Java Platform, Enterprise Edition 7 (JSR 342)
  • Concurrency Utilities for Java EE 1.0 (JSR 236)
  • Java Persistence 2.1 (JSR 338)
  • JAX-RS: The Java API for RESTful Web Services 2.0 (JSR 339)
  • Java Servlet 3.1 (JSR 340)
  • Expression Language 3.0 (JSR 341)
  • Java Message Service 2.0 (JSR 343)
  • JavaServer Faces 2.2 (JSR 344)
  • Enterprise JavaBeans 3.2 (JSR 345)
  • Contexts and Dependency Injection for Java EE 1.1 (JSR 346)
  • Bean Validation 1.1 (JSR 349)
  • Batch Applications for the Java Platform 1.0 (JSR 352)
  • Java API for JSON Processing 1.0 (JSR 353)
  • Java API for WebSocket 1.0 (JSR 356)
MRs:
  • Web Services for Java EE 1.4 (JSR 109)
  • Java Authorization Service Provider Contract for Containers 1.5 (JACC 1.5) (JSR 115)
  • Java Authentication Service Provider Interface for Containers 1.1 (JASPIC 1.1) (JSR 196)
  • JavaServer Pages 2.3 (JSR 245)
  • Common Annotations for the Java Platform 1.2 (JSR 250)
  • Interceptors 1.2 (JSR 318)
  • Java EE Connector Architecture 1.7 (JSR 322)
  • Java Transaction API 1.2 (JSR 907)
  • JavaMail 1.5 (JSR 919)
We'd like to thank all of the community members who have contributed to this process — in particular the members of our Expert Groups, members of our JSR projects on java.net (operating in the open under the JCP transparency program), members of the JUGs participating in the Adopt-a-JSR program, and participants in our outreach surveys. Stay tuned for our Java EE 7 GlassFish Reference Implementation release, coming within the next couple of weeks.

20/5/13

SQLite versus Derby

Apache Derby is available at http://db.apache.org/derby/. It is also included as standard in Java SE 6, under the name "Java DB".

Overall
Both SQLite and Derby operate directly from disk. Only the parts of the database file(s) needed to carry out the requested operations are read.

Zero-Administration
Both SQLite and Derby offer zero-administration, embeddable SQL database engines. SQLite stores all the data in a single cross-platform disk file. Derby spreads its data across multiple disk files.

Host Language Support
SQLite is written in ANSI C and supports bindings to dozens of languages, including Java. Derby is written in Java and is thus usable only from Java and from scripting languages that run on the Java VM (Jython, JRuby, Jacl, etc.); it is currently exposed only via a JDBC driver. (There is an ODBC driver for Derby, but it is no longer maintained.) On the other hand, Derby's 100% Java JDBC driver runs (with occasional glitches) cross-platform on any Java VM with a single binary distribution. SQLite is very portable as well, but you would have to maintain multiple binaries if shipping a cross-platform product.

SQL Language Support
Derby supports all of SQL92 and most of SQL99. SQLite supports only a subset of SQL92, though the supported subset is large and covers the most commonly used parts. Some differences are pointed out below. One specific difference: Derby supports RIGHT JOIN and FULL OUTER JOIN; SQLite does not.

Memory Utilization
The code footprint of SQLite is less than 250KB. The code footprint of Derby is about 2000KB compressed, more than 8 times larger. A large part of this difference, however, is due to Derby's built-in localization and collation support for multiple languages. In general, Derby's memory utilization is considerably higher than SQLite's, occupying several megabytes.

Concurrency
SQLite allows multiple simultaneous readers and a single writer. Multiple processes can have the database open simultaneously. Derby, in its embedded mode, allows only a single process to have the database open at a time. However, Derby also offers a full client/server mode, in which it gives multiple processes access to the database with row-level locking. Client/server mode of course requires a thread or process to act as the server, and performs worse than embedded mode.

Roles, Security, Schemas
Derby supports full encryption (see below). In addition it supports multiple databases, full SQL role granting, SQL schemas for separating data within a single database, and full user authorization. SQLite is largely a single-database-at-a-time engine; the ATTACH DATABASE command can partially work around this. Because of this design, neither SQL roles nor schemas are implemented. Typically, access to the SQLite disk file grants full access to the caller. This is not a defect, but by design.

Callable Procedures
Derby has built-in support for these. SQLite allows you to "fake" them, but has no comparable feature.

Typing/Keys
SQLite supports only basic types; it is a mostly typeless system, which can be very convenient in some cases and annoying in others. Derby supports a wide variety of data types, including XML, and its foreign-key and referential-integrity support is complete.

Built-in Utilities
Derby has built-in online backup/restore and database consistency check utilities. SQLite has a basic database consistency check utility, but no corresponding online backup/restore: you must close connections to the file to get a consistent backup. This is not usually a problem for SQLite, since concurrent use by multiple programs is not usually a design goal.

Encryption/Compression
Derby has built-in support for encryption and compression. SQLite has some optional add-ins, but they are not part of the standard library.

Collation Support
Both support custom collation functions. Derby comes with many multilingual collations and localizations built in; these have to be added to the core SQLite package manually by the programmer.

Case-Sensitive LIKE
Derby has a case-sensitive LIKE operator; SQLite does not. Derby supports custom collations and indices, like SQLite, but does not ship with a built-in case-insensitive option.

Pagination
Derby now fully supports pagination, albeit not via the non-standard but common LIMIT/OFFSET syntax. SQLite fully supports the non-standard but extremely useful LIMIT and OFFSET clauses that Postgres and MySQL have adopted.

Replication/Failover
Derby offers a basic master-slave replication system; there are also JDBC clustering drivers for Derby that allow failover to another Derby server. SQLite has no such mechanism (again, this is rarely part of the design spec for SQLite's use cases).

Crash-Resistance
Both Derby and SQLite are ACID compliant in their default configurations, so their databases will survive a program crash or even a power failure.

Database File Size
No data is currently available on the relative sizes of SQLite and Derby database files. Both support compacting database files; however, SQLite's VACUUM command makes the database inaccessible while it runs, whereas Derby's analogous procedure can be run online.

Full Text/Virtual Tables
Derby has no functions comparable to SQLite's full-text search and virtual tables.

Encryption
The ability to encrypt databases is built into Derby. SQLite leaves stubs for an implementation, but the implementation itself is an extra-cost feature.

Speed
No data is currently available on the relative speed of the SQLite and Derby database engines. Their query operation is similar in function; the relative speed of different queries depends on cache utilization, query-plan optimization, and implementation. Be prepared, however, for an unpredictable speed penalty when running Derby under different VMs, even on the same hardware and OS.
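SQLite's single-writer, multiple-reader policy described under Concurrency behaves like a read-write lock. The following Java sketch is an analogy only (neither engine is implemented this way; SQLite uses file locks), illustrating why any number of readers may proceed together while a writer needs exclusive access:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SingleWriterDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Any number of readers may hold the read lock at once.
        lock.readLock().lock();
        boolean secondReader = lock.readLock().tryLock(); // readers share
        System.out.println("two concurrent readers: " + secondReader);

        // A writer is excluded while any reader holds the lock.
        boolean writerWhileReading = lock.writeLock().tryLock();
        System.out.println("writer while readers active: " + writerWhileReading);

        lock.readLock().unlock();
        lock.readLock().unlock();

        // With no readers left, exactly one writer may proceed.
        boolean writerAlone = lock.writeLock().tryLock();
        System.out.println("single writer: " + writerAlone);
        lock.writeLock().unlock();
    }
}
```

Derby's client/server mode relaxes this further with row-level locking, at the cost of running a server process.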

26/4/12

LOL memory

Here's something worth sharing: to resist the harsh rigors of space, NASA used something called core rope memory in the Apollo and Gemini missions of the 1960s and 70s. The memory consisted of ferrite cores connected together by wire. The cores were used as transformers and acted as either a binary one or zero. The software was created by weaving together sequences of one and zero cores by hand. According to the documentary Moon Machines, engineers at the time nicknamed it LOL memory, an acronym for "little old lady," after the women on the factory floor who wove the memory together. The information comes from the ibiblio Apollo archive, a comprehensive guide to the Apollo Guidance Computer that includes an emulator of the system that's well worth trying out.

8/8/11

Take a Look at Windows 8

By George Norman - Software News Editor

At the time of writing, the latest and greatest version of the Microsoft-developed Windows operating system is Windows 7. To date, more than 400 million Windows 7 licenses have been sold worldwide, which prompted Microsoft to call Windows 7 the fastest-selling operating system in history. But Microsoft isn’t resting on its laurels; it is already working on the successor to Windows 7. Below you can check out some useful info about the upcoming operating system.

The name is Windows 8
During the development process of the current version of Windows, the team referred to it by the codename Windows 7. Shortly before showcasing a pre-Beta developer-only release, Microsoft decided to adopt the codename as the operating system’s official name. With the successor to Windows 7, everyone assumed that Microsoft would use the name Windows 8. And they assumed correctly, but for a long time Microsoft denied that it would use that name and referred to the operating system as Windows Next.

This May, at a conference in Japan, Steve Ballmer referred to the upcoming version of Windows as Windows 8, prompting many to say that Windows 8 had been picked as the official name. At the time Microsoft released a retraction saying that “no final decision on a name had yet taken place”. Then earlier this month Microsoft confirmed that Windows 8 has been adopted as the official name of the upcoming operating system.

What will it run on (system requirements)
When Microsoft rolled out Windows 7 it wanted to ensure that every Windows Vista user out there (and even XP users) would be able to upgrade to Windows 7. That is why the minimum Windows 7 system requirements were not too scary. Here they are again:
  •  Processor: 32-bit or 64-bit 1GHz processor
  •  Memory (RAM): 1GB for the 32-bit edition, 2GB for the 64-bit edition
  •  Graphics card: DirectX 9.0 capable with WDDM (Windows Display Driver Model) 1.0 driver or better
  •  Graphics memory: 32MB
  •  HDD space: 16GB for the 32-bit version, 20GB for the 64-bit version
  •  Other drives: DVD-ROM
  •  Audio: audio output
Microsoft does not want to alienate its Windows 7 userbase (as I’ve mentioned above, more than 400 million Windows 7 licenses have already been sold) and consequently it announced that Windows 8 will have the same system requirements as Windows 7, or perhaps even lower. This bit of info was made public by Tami Reller, Corporate VP of Microsoft’s Windows Division, at the Worldwide Partner Conference 2011 that took place this July in LA. Tami said that if a PC can run Windows 7 now, it will be able to run Windows 8 when it is released to the public.

When it will be released
All we have to go on here are rumors as no official date has been presented by Microsoft. According to the rumors floating around on the web:
  •  A Beta version of Windows 8 will be released in September 2011 at the BUILD Conference. The rumor says that Microsoft will announce the release of Internet Explorer 10 (IE10) at the same conference.
  •  Windows 8 will reach the RTM (Release to Manufacturing) milestone in April 2012
  • Windows 8 will hit GA (General Availability; the moment when it’s available for purchase) in January 2013
When Steve Ballmer once said that Windows 8 would be released in 2012, Microsoft promptly issued a retraction saying that Ballmer misspoke. In my opinion, if Microsoft does roll out Windows 8 in 2012, the operating system will hit GA by the end of August or beginning of September (the “back to school” period) or by December (the 2012 holiday season).

Windows 8 will have a new interface
We don’t have the full list of changes for Windows 8 just yet, but what we do know is that the operating system will feature a redesigned user interface that has been optimized for touch devices (tablets). Instead of a Start menu there’s now a Start screen that features live application tiles; in other words, a tile-based Start screen replaces the classic Start menu. The live app tiles display notifications and up-to-date information from the user’s apps.
And speaking of apps, the new interface will allow the user to easily switch between apps; Microsoft said the process of switching between apps will be a fluid and natural thing. The apps can also be snapped and resized to the side of the screen, making multitasking that much easier. The apps will be web-connected and web-powered and built with HTML5 and JavaScript.


Microsoft not interested in your ideas for Windows 8
The Windows 7 advertising touted the fact that Windows 7 was the customers’ idea. So do you think Microsoft takes ideas from the public for Windows 8? It turns out that Microsoft is not interested in your ideas for Windows 8. Those who submit a suggestion for Windows 8 will receive a notification telling them that Microsoft does accept suggestions for existing products and services, but not for new products, technologies, or processes.

Disney Director hired to help with Windows 8 campaign
To help out with the marketing campaign for the upcoming Windows 8 operating system, Microsoft has turned to former Disney Director of Brand Strategy Jay Victor. When he worked for Disney, Victor’s duties included “market research, business development, product development, creative, and marketing.” His job for Microsoft includes “accountability for brand stewardship on primary brand(s)” which is fancy talk for “he’ll be responsible for introducing Windows 8.”

Supports ARM chipsets
There's not much to say here: Windows 8 provides support for ARM chipsets as well. This means that Windows 8 will be the first viable Windows operating system for tablets.

Rumor roundup
Apart from the rumor that Microsoft will RTM in April 2012, there are a bunch of other rumors making the rounds online. Here’s a quick look at these rumors:
  •  Windows 8 will be safer, as it will include SmartScreen, the URL reputation system and file reputation system included in Internet Explorer 9
  •  Microsoft plans to drop the Windows brand following the release of Windows 8. This rumor says that sometime in 2015 or 2016, Microsoft will drop the Windows brand and will release an operating system for PCs, tablets, smartphones and Xbox.
  •  Windows 8 will provide support for Xbox 360 games and it will provide a subscription service similar to Xbox Live, but the online gaming will be carried out through the Windows Live Marketplace instead of Xbox Live.
  •  Windows 8 will include native support for 3D monitors
  •  Microsoft will release its own Windows 8 tablet

28/7/11

15 Free Computer Science Courses Online

Alfred Thompson, Microsoft, 13 Aug 2009 3:58 AM

Trying something different today. Here is a guest post by Karen Schweitzer who has found a lot of interesting online courses in computer science. You can also find free curriculum resources at Microsoft’s Faculty Connection.

It is no longer necessary to pay tuition and enroll in a formal program to learn more about computer science. Some of the world's most respected colleges and universities now offer free courses online. Although these courses cannot be taken for credit and do not result in any sort of degree or certificate, they do provide high quality education for self-learners. Here are 15 computer science courses that can be taken for free online:

Introduction to Computer Science - Connexions, a Rice University resource, hosts this free course that introduces students to computer science. Covered topics include computer systems, computer networks, operating systems, data representation, and computer programming.

Introduction to Computer Science and Programming - This free Massachusetts Institute of Technology course provides an undergraduate-level introduction to computer science and computer programming. The course includes online readings, assignments, exams, and other study materials.

Mathematics for Computer Science - This free course, also from the Massachusetts Institute of Technology, teaches students how math is relevant to computer science and engineering. Course materials include lecture notes, problem sets, assignments, and exams.

Introducing ICT Systems - The UK's Open University provides this free online computer science course to self-learners who want to gain an understanding of ICT (information and computer technologies) systems. The course is designed for introductory students and can be completed by most people in less than 10 hours.

Programming with Robots - Capilano University offers this free online computer science course to self-learners who want to explore computer programming and robotics. Course materials include tutorials, readings, lectures, exercises, assignments, and quizzes.

System Design and Administration - This free computer science course from Dixie State College focuses on computer information systems and technologies. The course introduces students to system design and administration through lecture notes, assignments, and other self-guided study materials.

HTML Basics - The University of Washington Educational Outreach Program offers several free courses, including this free HTML course. The course is designed for beginning level students who are unfamiliar with HTML documents, tags, and structure.

Software Applications - This free course from Kaplan University is a very basic course for people who want to learn more about using software applications. The course covers Internet applications as well as word processing, spreadsheet, communication, and presentation apps.

Object-Oriented Programming in C++ - The University of Southern Queensland offers this free computer science course to teach students the basics of C++ programming and object-oriented design. The course includes 10 modules, multiple lectures, and assignments.

Operating Systems and System Programming - This free online course from the University of California-Berkeley includes a series of audio and video lectures on operating systems and system programming.

Data Structures - This free audio/video course, also from the University of California-Berkeley, covers data structures through a series of online lectures.

Artificial Intelligence - The University of Massachusetts-Boston offers this free computer science course to self-learners who are interested in artificial intelligence (AI). The course uses assignments and other study materials to teach students how to write programs.

Information Theory - This advanced-level computer science course from Utah State University teaches concepts relating to the representation and transmission of information. Course materials include programming and homework assignments.

Network Security - This free computer science course from Open University is for master-level students who have substantial knowledge of computing. The course explores a wide range of topics, including network vulnerabilities, network attacks, encryption, cryptography, access control, and authentication.

Computational Discrete Mathematics - Carnegie Mellon University provides this free computer science course through the school's Open Learning Initiative (OLI). The self-guided course is ideal for independent learners who want to gain a better understanding of discrete mathematics and computation theory.

Guest post from education writer Karen Schweitzer. Karen is the About.com Guide to Business School. She also writes about online colleges for OnlineCollege.org.

COMMENTS
 -  Another resource is http://academicearth.org/subjects/computer-science
 -  Here is a link to MIT Open Courseware http://ocw.mit.edu/OcwWeb/web/home/home/index.htm. There are Computer Science courses and more. All free.

27/6/11

OPL Language

OPL language: the battle of array declarations
The OPL Development Studio, created by ILOG (and recently acquired by IBM), provides tools based on the Optimization Programming Language. The tool aims to simplify the process of feeding data and model formulae to other ILOG optimization tools for mathematical programming and constraint programming.

Experience has proven OPL to be extremely helpful as a compact, high-level language for both data and models. Nevertheless, the language still has some constructs that are neither well understood nor well documented.

For example, there are many pitfalls a novice OPL developer will face while working with arrays. Here, and in subsequent articles, I will share some advice that would have been useful while I was learning OPL.


What arrays are

An OPL array is a list of elements that are identified by an index. OPL is very strict for an array declaration:

o The index must be an element of a discrete data type, and the type must be the same for all indices of the array.
o An array stores values of any type; again, the type must be the same for all values of the array.
o All values that are possible as an index must be enumerated in the array declaration. Of course, all those index values have to be of the same data type.
o This enumeration implies that, for every index value, there must be an element in the array. That means that no position in the array may be left “empty”.
o Furthermore, the order in which the index values are enumerated determines the order in which array elements are traversed.

Because of these restrictions, OPL arrays are not just a listing of elements, but may be understood as an associative map where each index value maps to exactly one element value.

An OPL array may also be seen as a discrete function array(index) => element. I personally like to call this enumeration of index values the domain of the array, and the stored elements the image of the array.
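This associative-map view translates directly to other languages. As a sketch (class and variable names are illustrative, not part of OPL), the a[1..4] squares array from the next section can be modeled in Java as an insertion-ordered map, where the key set is the domain and the values are the image:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OplArrayAsMap {
    public static void main(String[] args) {
        // Domain: the index values 1..4, kept in declaration order.
        // Image: the stored squares, one per index value (no "empty" slots).
        Map<Integer, Integer> a = new LinkedHashMap<>();
        for (int x = 1; x <= 4; x++) {
            a.put(x, x * x);
        }
        System.out.println(a);        // {1=1, 2=4, 3=9, 4=16}
        System.out.println(a.get(3)); // 9
    }
}
```

A LinkedHashMap is used because, as in OPL, the enumeration order of the domain determines the traversal order.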

How an array is declared with ranges
The simplest array declaration defines the domain as a range of consecutive integer values. The example associates its square with each of the integers from 1 to 4:
int a[1..4] = [1, 4, 9, 16];

Observe that the declaration contains the domain (the range 1..4, all consecutive integer from 1 to 4: 1, 2, 3 and 4). The declaration also defines the image: 1, 4, 9, 16. Both domain and image are ordered sets that define a relation, meaning that a[1]=>1, a[2]=>4, a[3]=>9 and a[4]=>16.

The image could also be read from a data file:
int a[1..4] = ...;
assuming there is a text file that contains a line such as: a = [1, 4, 9, 16];

How an array is declared with formula
The image does not need to be expressed as a list; a formula is also allowed.
int a[x in 1..4] = x*x;

Observe that the declaration still presents the domain (1..4) and the image (x*x). The formula is automatically evaluated for each value from the domain.
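The same evaluate-for-each-domain-value idea can be sketched in Java with a stream (the names here are illustrative):

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class OplFormulaArray {
    public static void main(String[] args) {
        // Rough equivalent of the OPL declaration: int a[x in 1..4] = x*x;
        // The domain is the range 1..4; the image is x*x evaluated per value.
        int[] a = IntStream.rangeClosed(1, 4)
                           .map(x -> x * x)
                           .toArray();
        System.out.println(Arrays.toString(a)); // [1, 4, 9, 16]
    }
}
```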

How an array is declared with ordered sets
Alternatively, the declaration may define the domain as an ordered set of primitive values (a sequence of possibly non-consecutive values). The example associates its square with each of three arbitrary integers:
int a[{1, 3, 6}] = [1, 9, 36];

An index of string data type must be declared as a set, since there is no concept of a “range of strings”. The example shows a function that associates an uppercase letter with each lowercase letter.
string a[{"a", "b", "c", "d"}] = ["A", "B", "C", "D"];

How an array is declared with ordered sets of tuples
Since tuples are also discrete and unique (according to OPL convention), they may be used as indices for arrays. Again, one is required to declare a set of tuples as the domain for the index.
int a[{<1,2>, <3,3>, <4,5>}] = [3, 6, 9];

In this example, the domain is composed of a set of pairs of numbers. Each pair is associated with the sum of the numbers in the pair.
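Tuple indices behave like composite map keys. A Java sketch of the same pair-to-sum relation, using immutable lists as keys (the names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OplTupleIndex {
    public static void main(String[] args) {
        // Rough equivalent of: int a[{<1,2>, <3,3>, <4,5>}] = [3, 6, 9];
        // Each composite key (a pair) maps to the sum of its components.
        Map<List<Integer>, Integer> a = new LinkedHashMap<>();
        a.put(List.of(1, 2), 1 + 2);
        a.put(List.of(3, 3), 3 + 3);
        a.put(List.of(4, 5), 4 + 5);
        System.out.println(a.get(List.of(3, 3))); // 6
    }
}
```

List.of is used because its equals/hashCode make lists usable as map keys, mirroring OPL's requirement that tuples be discrete and unique.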

OPL and Java: loading dynamic Linux libraries
When calling IBM ILOG OPL (Optimization Programming Language) from a Java application running on Linux, one will face some issues regarding loading dynamic OPL libraries. Typical error messages look like:
Native code library failed to load: ensure the appropriate library (oplXXX.dll/.so) is in your path.
java.lang.UnsatisfiedLinkError: no opl63 in java.library.path
java.lang.UnsatisfiedLinkError: no opl_lang_wrap_cpp in java.library.path
java.lang.UnsatisfiedLinkError: no cp_wrap_cpp_java63 in java.library.path
java.lang.UnsatisfiedLinkError: no concert_wrap_cpp_java63 in java.library.path


This article explains my considerations and some approaches to fixing it.

According to the OPL Java Interface documentation, granting access to the OPL should be as simple as:
this.oplFactory = new IloOplFactory();
this.errorHandler = oplFactory.createOplErrorHandler();
this.settings = oplFactory.createOplSettings(this.errorHandler);
...


However, the first time Java reaches a reference to any class of the OPL API, it will try to load all the C-compiled dynamic libraries that implement the OPL interface. Under Linux, this library is called liboplXXX.so (where XXX is the OPL version, e.g. 63 for 6.3) and is usually found at ./bin/YYY/liboplXXX.so under the OPL installation directory (where YYY names your operating system and machine architecture).

The easiest way to ensure that Java finds the OPL library is to pass its path on the java command line with the -Djava.library.path JVM parameter:
java -Djava.library.path=/opt/ilog/opl63/bin/x86-64_debian4.0_4.1 -jar OptApplication.jar

On other ILOG products, I used to write code that forces the library to load, to avoid requiring the user to deal with the -Djava.library.path JVM parameter:
try { // (does not work)
    System.load("/opt/ilog/opl63/bin/x86-64_debian4.0_4.1/libopl63.so");
} catch (UnsatisfiedLinkError e) {
    throw new OplLibraryNotFoundException(e);
}


Unfortunately, there is a hidden trap: the oplXXX.so itself has binary dependencies to other ILOG libraries. Both approaches (System.load and JVM parameter) will fail with an error message like:
java.lang.UnsatisfiedLinkError: /opt/ilog/opl63/bin/x86-64_debian4.0_4.1/libopl63.so: libdbkernel.so: cannot open shared object file: No such file or directory
java.lang.UnsatisfiedLinkError: no opl_lang_wrap_cpp in java.library.path
java.lang.UnsatisfiedLinkError: no cp_wrap_cpp_java63 in java.library.path
java.lang.UnsatisfiedLinkError: no concert_wrap_cpp_java63 in java.library.path


According to ldd, all the required dependencies are:
libdbkernel.so, libdblnkdyn.so, libilog.so, libcplex121.so

One solution would be to load all the libraries in reverse dependency order before referencing any OPL class:
try { // (does not work)
    System.load("/opt/ilog/opl63/bin/x86-64_debian4.0_4.1/libilog.so");
    System.load("/opt/ilog/opl63/bin/x86-64_debian4.0_4.1/libdbkernel.so");
    System.load("/opt/ilog/opl63/bin/x86-64_debian4.0_4.1/libdblnkdyn.so");
    System.load("/opt/ilog/opl63/bin/x86-64_debian4.0_4.1/libcplex121.so");
    System.load("/opt/ilog/opl63/bin/x86-64_debian4.0_4.1/libopl63.so");
} catch (UnsatisfiedLinkError e) {
    throw new OplLibraryNotFoundException(e);
}


Unfortunately, not all of the binary library dependencies conform to JNI, and it is not possible to force pre-loading them this way.

It happens that, in order to load libopl63.so, the JVM passes control to the GNU dynamic linker (ld.so), which is in charge of loading libopl63.so and all of its dependencies. The dynamic linker is a component of Linux and runs under the scope of the operating system. It is completely unaware of the JVM that invoked it, and therefore knows nothing about JVM configuration or class-loading policies. It will not look in the paths listed by the -Djava.library.path JVM parameter; instead, it looks in the paths listed in LD_LIBRARY_PATH.
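One thing that can be done in Java is to surface these failures early with a clear report, instead of a crash deep inside OPL initialization. The sketch below (class name and paths are hypothetical; adjust to your OPL version and platform) attempts each load in dependency order and records every UnsatisfiedLinkError, whether the file is missing or ld.so could not resolve a transitive dependency:

```java
import java.util.ArrayList;
import java.util.List;

public class NativePreloader {
    /** Tries to load each absolute library path in order; returns the errors seen. */
    public static List<String> loadInOrder(List<String> paths) {
        List<String> errors = new ArrayList<>();
        for (String path : paths) {
            try {
                System.load(path);
            } catch (UnsatisfiedLinkError e) {
                // Either the file is missing, or the dynamic linker could not
                // resolve a transitive dependency (LD_LIBRARY_PATH not set).
                errors.add(path + ": " + e.getMessage());
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        // Hypothetical installation path; adjust to your OPL version/platform.
        String dir = "/opt/ilog/opl63/bin/x86-64_debian4.0_4.1";
        List<String> errors = loadInOrder(List.of(
                dir + "/libilog.so",
                dir + "/libdbkernel.so",
                dir + "/libdblnkdyn.so",
                dir + "/libcplex121.so",
                dir + "/libopl63.so"));
        errors.forEach(System.out::println);
    }
}
```

Note that this only diagnoses the problem; as explained above, setting LD_LIBRARY_PATH is still required for the dynamic linker to find the dependencies.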

I agree that this is really odd. I thoroughly checked reference manuals and documentation and talked to experienced Linux system administrators: there is really nothing one can do in Java code or configuration to fix this issue. The only solution is to set the LD_LIBRARY_PATH environment variable to tell the dynamic linker where to locate the additional OPL libraries. Calling one's application then requires a redundant command line such as:
LD_LIBRARY_PATH=/opt/ilog/opl63/bin/x86-64_debian4.0_4.1 java -Djava.library.path=/opt/ilog/opl63/bin/x86-64_debian4.0_4.1 -jar OptApplication.jar

Even worse, one needs to set LD_LIBRARY_PATH on each Java invocation. Editing .bash_profile or .bashrc is of little use, since most setuid tools (such as gdm, which starts your graphical session) reset LD_LIBRARY_PATH for security reasons. And since practically all log-in access relies on a setuid application, LD_LIBRARY_PATH will always be reset.

18/4/11

The Dangers of HTML5: WebSockets and Stable Standards

By Cameron Laird

You celebrate: it's the first Friday after your start-up opens its first real office and a round of funding came through. This is going to be a good weekend. HTML5 has the technologies you need to make your idea for a Web-based massive multi-player game take off. Hardware-accelerated gaming in a browser is real and you're going to lead the way.

Until Monday, when you find that all the tests you'd already done, and all the demos you've staged, no longer matter. Your website crashes, the game freezes and there's nothing obvious you can do to bring it back.

What Happened to the WebSockets?

This story is a true one. It has already happened to several teams that depend on the WebSocket protocol. How could things go so wrong? What protection can Web developers put in place to avoid being "burned" this way?

The short answer: constant vigilance. The WebSocket situation is more involved than any few-word explanation like, "he ran a red light" or "they didn't do back-ups." Like most real-world dramas, many factors came together to create the WebSocket situation:

  •  The potential for "cross-layer" security exploits due to lack of testing
  •  A highly unpredictable path for how technologies evolve across standards organizations
  •  The role of browsers and browser vendors that support standards
The only insurance you have is to be aware of the changes that occur with unstable standards (and to invest the time to support them). To see why there's no easy systematic fix, we need clarity about what HTML5 is, WebSocket's position within HTML5, and how standards-based development itself is evolving.

HTML5 and Application Development

HTML5 has significantly more potential than its predecessors. In the past, "Web application" generally meant something no more sophisticated than a data-entry form, like a college entrance examination or a tax return. Previous incarnations of Web standards went by several titles, including HTML4; they brought us to roughly the point that made search engines, the cloud, and the rest of Web 2.0 possible.

HTML5, in contrast, is a collection of technologies that are emerging with varying degrees of stability. These range from hardware-accelerated graphics, audio, and video that can make a Web game feel like a native application, to a mundane (but highly valuable) database standard like IndexedDB.

The Web is still the platform for reaching the most people at relatively low cost. HTML5, in broad terms, will be the set of standards that makes networked application development feasible across a range of platforms and devices. All the devices you use -- phones, game consoles, automobiles, TVs, point-of-sale installations, household appliances and more -- have the realistic potential to support a single set of standards. That's quite an achievement for a set of technologies that are still emerging!

HTML5 is also not a single coherent definition or document like, say, HTTP 1.1 (and we should recognize that even that rather well-controlled specification was published in seven distinct parts). HTML5 won't be completely finished for at least a few more years. So how do Web developers take advantage of technologies at varying levels of readiness? How do browsers play a role in supporting HTML5 standards with developers in mind?

Speed of Innovation vs. Spec Stability

The key actors behind HTML5 could make it "tight" -- more coherent, integrated and internally consistent. It would be more trustworthy and blemish-free. That would appear to make our choices as developers simpler.

Such an alternate-reality HTML5 would probably have taken an extra decade, and been unused on release. The real choice is not between a high- and a low-quality standard; it's how best to balance flexibility and reliability in standardization. Moreover, when a standard lags too much, clever developers invent their own techniques for solving their real problems, further muddying the engineering landscape. The HTML5 sponsors did the right thing in modularizing the standard and its process. Parts of HTML5 are fairly well understood and noncontroversial; they just needed standardization, and a few of them have been usable in Web browsers for more than five years already.

Other parts are more difficult, including the WebSocket protocol. Understand that "difficulty" here isn't a euphemism for "written by people acting in bad faith" or "subject to an evil conspiracy." The problems HTML5 addresses are hard ones that demand careful design and engineering. Occasionally, with the best of intentions and even plenty of intense meetings, mistakes are made.

The Role of Browsers

Browsers and browser vendors like Google, Microsoft, and Mozilla also play a role in how HTML5 specs play out. Each one has a different perspective on how to balance the trade-offs between quick innovation and spec stability.

Google's Chrome and Mozilla's Firefox have generally mixed stable specs with ones that are still rapidly changing. With Internet Explorer 9, Microsoft has begun to distinguish stable from unstable specifications, keeping the latter out of the browser; instead, the company experiments with unstable specs at www.html5labs.com.

SVG makes for an interesting example: the first browser with practical display of Scalable Vector Graphics, late in 2000, was Internet Explorer, with an SVG plug-in from Adobe. By 2005 and 2006, other browsers supported parts of the still-evolving SVG standard. IE9 introduced native support for most of SVG during 2010-2011, after Microsoft concluded that the specification was adequately stable. While Microsoft probably could have supported SVG sooner, IE did spare Web developers many of the pain points that made the format hard to test and, in some cases, led to site breakage as the spec changed.

So how do developers decide what to support when browser vendors disagree? For the foreseeable future, asking "does browser B support HTML5?" simply won't make much sense; the pertinent question is more along the lines of, "how well does a particular version of B support the particular versions and parts of HTML5 that our implementation requires?" Think of "support" here as the character or attitude of the browser rather than a particular feature, like a checkbox in a table. Suppose, for instance, that your application focuses on scheduling, so the new datetime input types are crucial to you. You need to analyze clearly which browser releases give you the input behavior you're after -- but you equally need to know how the browser vendors decided on those behaviors, and therefore what the different browsers are likely to do as standards continue to develop. You also need to determine whether you want to support something that will continue to change and will likely, at times, break your Web experience.
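One way to act on that analysis is to treat per-release browser support as data your build or runtime can consult, rather than as a single yes/no "supports HTML5" question. The sketch below shows the idea; the browser names, versions, and feature labels in the table are hypothetical placeholders, and in practice you would maintain the table from your own testing or a compatibility-tracking source.

```python
# Sketch: per-release feature support as data. The browsers, versions, and
# feature names below are illustrative placeholders, not real support claims.

SUPPORT = {
    ("browser-a", 9): {"svg", "canvas", "websocket", "input-datetime"},
    ("browser-a", 8): {"svg", "canvas"},
    ("browser-b", 4): {"svg", "canvas", "websocket"},
}

def can_run(browser: str, version: int, required: set) -> bool:
    """True only if every feature the app requires is listed for that release."""
    return required <= SUPPORT.get((browser, version), set())

print(can_run("browser-a", 9, {"websocket", "input-datetime"}))  # True
print(can_run("browser-b", 4, {"input-datetime"}))               # False
```

The point of the data-driven shape is that when a spec (or a browser's behavior) changes, you update one table entry instead of hunting through scattered conditionals.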

WebSockets: An Unstable Spec Case Study

Let's go deeper into the WebSocket case. There's no question that mistakes were made with its early prototypes and their immediate adoption regardless of stability. To understand how, think first of the original Web of the early 1990s. Back then it was all "pull": a Web browser sends a request and retrieves a page to display. The need for more general kinds of networking has been obvious for most of the last two decades; among all the technical fixes to this point, the AJAX model, first accessible in Internet Explorer 5.0 in the spring of 1999, represented the most dramatic advance.

Even Ajax imposes constraints on the responsiveness (latency) and capacity (bandwidth) of applications, constraints that have become unacceptable. They have remained in large part because security is so hard to get right. The point of WebSockets is to solve this problem.
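To see the bandwidth part of the problem concretely, consider the protocol overhead of simulating push by polling over HTTP. The header sizes below are illustrative assumptions (real sizes vary with cookies and site configuration), but the ratio is the point: every poll repeats full HTTP headers, while an open WebSocket adds only a few bytes of framing per message.

```python
# Back-of-the-envelope comparison of HTTP polling vs. WebSocket framing.
# The byte counts are assumed, illustrative values, not measurements.

HTTP_REQUEST_HEADERS = 450    # assumed bytes of request headers + cookies per poll
HTTP_RESPONSE_HEADERS = 250   # assumed bytes of response headers per poll
WEBSOCKET_FRAME_OVERHEAD = 6  # 2-byte header + 4-byte mask for a small frame

def polling_overhead(polls_per_second: int, seconds: int) -> int:
    """Total protocol overhead, in bytes, for naive HTTP polling."""
    per_poll = HTTP_REQUEST_HEADERS + HTTP_RESPONSE_HEADERS
    return per_poll * polls_per_second * seconds

def websocket_overhead(messages_per_second: int, seconds: int) -> int:
    """Total framing overhead, in bytes, for the same traffic on one socket."""
    return WEBSOCKET_FRAME_OVERHEAD * messages_per_second * seconds

# A game checking for updates 10 times a second, for one minute:
print(polling_overhead(10, 60))    # 420000 bytes of repeated headers
print(websocket_overhead(10, 60))  # 3600 bytes of framing
```

Under these assumptions the polling approach spends two orders of magnitude more bytes on protocol overhead alone, before any latency cost of setting up each request.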

WebSocket seemed a "good enough" solution when Chrome first supported it at the end of 2009. The spec kept changing, though, and developers had to keep updating their implementations as their sites broke. By fall 2010, several browsers supported WebSocket capabilities. That was also when a research team published a paper describing security vulnerabilities in the protocol. The outcome: Mozilla and Opera turned WebSocket off in their browsers, and Microsoft, judging the technology too unstable to bet on, kept it out of Internet Explorer and prototyped it separately instead. It's widely recognized that WebSocket is not yet stable: it will certainly continue to change and, when it becomes successful enough, will begin again to expand in capabilities and refinements.
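The repair that followed those disclosures shows how much the wire format itself was still in motion: the IETF drafts that succeeded the early protocol require the client to mask every frame payload with a fresh random 4-byte key, XORed byte by byte, so attacker-chosen bytes never appear verbatim on the wire to confuse intermediaries. A minimal sketch of that masking step:

```python
import os

def mask(payload: bytes, key: bytes) -> bytes:
    """XOR each payload byte with the repeating 4-byte masking key."""
    return bytes(b ^ key[i % 4] for i, b in enumerate(payload))

key = os.urandom(4)           # the client picks a fresh random key per frame
masked = mask(b"hello", key)
print(mask(masked, key))      # b'hello' -- XOR masking is its own inverse
```

Sites whose servers were written against an unmasked draft broke the day their users' browsers started masking, which is exactly the kind of churn an unstable spec imposes.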

As mentioned above, browser vendors have made different choices about WebSocket support. Who's right in all this? Maybe everyone. While partisans lob shots at Mozilla and Google for shipping risky browsers, and at Microsoft for conservatism, the choices aren't easy. Engineering is all about trade-offs, and the trade-offs in a case such as this are subtle and hard to compute with precision. Different organizations, developing for different markets, might justly make different choices. Microsoft Technical Evangelist Giorgio Sardo is certainly right when he emphasizes, "It's important to get it right." Sardo doesn't mean something as simple as "always assume IE" or even "use only accepted standards." He admits that "personally I like WebSockets" -- and he should! HTML5 is the way it is because bright people are working at the edge of our understanding to make the most of the Internet infrastructure as it exists right now. There are thousands of valuable applications waiting to be written, and HTML5 is mostly part of the solution, not the problem.

Finding the Balance

The lesson of WebSockets, then, is not to retreat and give up on HTML5. Instead, we should take these steps:
  • 1. Analyze clearly what parts of stable HTML5 pay off for your site, versus the risks of unstable spec development
  • 2. Research why browsers support specific HTML5 technologies and what it means for your end-user experience if you develop for them
  • 3. Plan your development to balance new technology against website stability, and be prepared to weigh the costs of supporting changing standards
  • 4. And of course, stay current and be active in the latest spec discussions
Find or become an HTML5 expert through sites like HTML5 Labs or WebSocket.org that make it easier to assess a new technology. Are you looking for a simple choice, like adopting HTML5 and then living happily ever after? That's not realistic. What is realistic is that, with a little effort invested in the appropriate technical communities, you and your teammates can stay current with the best Internet programming practices. If you're good enough, you can even have a hand in their creation.

About the Author
Cameron Laird is an experienced developer who has written hundreds of articles on programming techniques. He's particularly enthusiastic about HTML5; keep up with him through Twitter.

10/4/11

Microsoft: Happy 36th birthday!

The company's story is an important part of Americana. But how much do you really know about it?
By Microsoft Subnet on Mon, 04/04/11 - 4:24pm.

Microsoft was founded on April 4, 1975, as a partnership between Bill Gates and Paul Allen. The company's history is an embedded part of Americana. Its leaders are household names. Its products grace just about every household in the land. Its story is the stuff of American legend and myth: a scrappy startup that turned into an international powerhouse.

But how much do you really know about the legendary software maker? Here's a quiz to test you. (Answers can be found on page 2 of this article.)

1. In what city and state was Microsoft founded?
a. Bellevue, Wash.
b. Redmond, Wash.
c. Albuquerque, N.M.
d. Tucson, Ariz.

2. How old were Bill Gates and Paul Allen when they founded Microsoft?
a. Gates was 19; Allen was 22.
b. Gates was 17; Allen was 26.
c. Gates was 24; Allen was 30.
d. Gates was 26; Allen was 19.

3. On November 2, 2001, Microsoft and the Department of Justice came to an agreement on the DOJ's antitrust lawsuit against Microsoft. What was the product that originally sparked the lawsuit?
a. DOS
b. Excel
c. Internet Explorer
d. Windows

4. In what year did Steve Ballmer join Microsoft?
a. 1990
b. 2000
c. 1965
d. 1980

5. What year was the flagship Windows 3.0 released?
a. 1984
b. 2000
c. 1988
d. 1990

Answers:

1. C. Microsoft was founded in Albuquerque, New Mexico. It didn't move to Washington until 1979 (Bellevue), and it moved to its Redmond headquarters in 1986.

2. A. The teenaged Gates was paired with a bushy-bearded Allen. Gates was only 19 but looked like he was 15. Allen was 22 but looked about 35.

3. C. Internet Explorer sparked the DOJ's antitrust case against Microsoft, which began in 1998. It was argued not only that bundling IE for free with Windows gave it an unfair advantage in the browser market, but also that Microsoft fiddled with Windows to make IE perform better than third-party browsers. After extensions, government oversight of Microsoft as a result of the case is set to expire in May.

4. D. Steve Ballmer joined Microsoft in 1980 in an executive operations role, responsible for the personnel, finance, and legal areas of the business. Although he became CEO in 2000, Bill Gates didn't retire from day-to-day operations until 2008, so Ballmer has run the company solo only since then. In 1980, Microsoft had year-end sales of $8 million and 40 employees.

5. D. Windows 3.0 was released in 1990 and was the first wildly successful version of Windows. It was also the first version to be pre-installed on hard drives by PC-compatible manufacturers. Two years later, Microsoft released Windows 3.1; together, the two versions sold 10 million copies in their first two years. When Windows 95 launched in 1995, people stood in line to buy their copies.