4/6/10

Conclusions on Parallel Computing

By Asaf Shelly on April 9, 2010 at 11:10 am

We have been dealing with parallel computing for a while now. Some of the ideas we had at the start proved to be wrong, while others are only now becoming relevant. No doubt about it: parallel computing was pushed and forced into the mainstream of computing just as Object Oriented was in the previous millennium.

Some History: Hardware

The first to deal with parallel computing were hardware developers, because the hardware supports multiple devices working at the same time, with different operation rates and response times. Hardware design is also Event Driven, because devices work independently and issue an Interrupt event when required. The computer hardware we know today is fully parallel; however, it is centralized, with a single CPU (Central Processing Unit) and multiple peripheral devices.

Some History: Kernel

The next to support parallel computing was the software infrastructure, which in modern operating systems is the Kernel. The Kernel must support multiple events, coming in the form of Hardware Interrupts and propagated upwards as Software Events. Kernels are commonly distributed in design, as several Drivers can communicate with each other. The centralized object in the system allows communication between the drivers and supports synchronization, but it is not supposed to contribute to the application's business logic in any way.

Some History: Network

UNIX is based on services. A Service is a way to call a function over a network. Network technologies required a distributed design in which every element is completely parallel to the next and there is no single 'processor unit' acting as the system's master. UNIX took this to the next level with technologies such as services, pipes, sockets, mailslots, Fork, and more. At a time when programming was tedious work, developing an operating system to support Fork meant extensive effort. Still, UNIX had built-in support for that mechanism, which solves so many problems... Only we forgot how to use it, and I don't remember seeing a new system design that had Fork in it.
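The fork-and-pipe mechanism described above is still available on any UNIX-like system. Here is a minimal sketch in Python; the function name and the squaring workload are invented for illustration, and `os.fork` is POSIX-only:

```python
import os

def fork_and_compute(x):
    """Fork a child process to do the work; the parent reads the
    result back over a pipe -- the same fork/pipe mechanism UNIX
    has offered since the 1970s."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:              # child: compute, write the result, exit
        os.close(r)
        os.write(w, str(x * x).encode())
        os.close(w)
        os._exit(0)
    else:                     # parent: read the result, reap the child
        os.close(w)
        result = os.read(r, 64)
        os.close(r)
        os.waitpid(pid, 0)
        return int(result)

print(fork_and_compute(12))   # the square of 12, computed in a child process
```

The point is how little ceremony the mechanism needs: the operating system handles scheduling both processes in parallel.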

Some History: Applications

When I was just starting with C programming and had just found out about threads, I tried doing things in parallel just to see how it worked. The result was, as you can imagine, far worse. The application ran much slower, there were "random bugs", and the code looked terrible. The explanation I got was that there is only one CPU and the different threads compete over it. No Multi-Core CPU means there is no ROI (return on investment) for using multiple threads and for the large effort required for a parallel design. The only reason to use a thread is when you really have to, for example when there is a need to wait for hardware or a network buffer.

Parallel Computing Today

A few years ago CPUs reached a hardware limitation beyond which they would have required special cooling. At that point the race to reduce silicon size and increase clock frequency ended. Instead of spending massive amounts of silicon on advanced algorithms to improve instruction pre-fetch, smaller and simpler CPU cores are used, leaving room for more of them on the same die. We got the Multi-Core CPU, which practically means several CPUs in the same computer.

At first the cores of a Multi-Core CPU were simpler than the single core they replaced. These cores also operated at a much lower frequency, which meant that an application designed for single-task operation took a massive performance hit when moving to a new computer, for the first time ever.

Parallel Computing has become mainstream. We started with a long series of lectures about parallel computing. It seemed that people wanted to know about this subject, but there was so much overhead that Parallel Computing simply scared people away. There is a huge ramp-up before you can be a good parallel programmer, just as there is for object-oriented programming. This meant that team leaders and architects were at the same level as beginner programmers, or perhaps had a very slight advantage. Add to this the fact that there are massive amounts of code already written for single-core CPUs, and real advantages can be achieved only after at least some rewriting. The last but most important reason to reject parallel computing was that it is easier and cheaper to buy another machine than to make the best of the CPU cores. This was actually a boost for Cloud Computing.

Who is doing Parallel Computing

There are several types of parallel computing. The hardware is parallel, so the Kernel is parallel; with this type of parallelism every worker is doing something different, and workers own their resources instead of sharing them. For a long while now, DSP (Digital Signal Processing) chips have been Multi-Core CPUs, so that the algorithms executed on these chips can run faster. Algorithms and DSP chips are evaluated in MIPS, the number of instructions executed per unit of time. Gaining a performance increase with an algorithm means either using fewer instructions or adding more worker CPU cores. PCs also run algorithms such as face recognition, image detection, image filtering, motion detection, and more. For these, the transition from a single-core CPU to a Multi-Core CPU was fast and simple.

An algorithm's increase in performance is relative to the amount of computation per data item: the more computation, the more cores can be used. Image blending (fade) is an example of an algorithm which cannot benefit from more than a single core. Take an image and blend each pixel with the corresponding pixel of another image. Each pixel is read from RAM, then a simple addition and a shift right are performed, and the result is written back to RAM. The CPU can operate at a rate of 3GHz while the RAM runs at 1GHz. For each pixel in the image we: read pixel A, read pixel B, add, shift, write the result pixel. Add another core and the CPU cores will mutually block on access to the memory. This is also true for databases and database algorithms such as sorting, linked lists, etc. For this reason the new Multi-Core CPUs have extensive support for parallel access to memory.
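The per-pixel sequence above can be sketched in a few lines of Python. This is for illustration only (real image code works on fixed-size pixel buffers), but the ratio of memory traffic to arithmetic is the same:

```python
def blend(a: bytes, b: bytes) -> bytes:
    """Fade two images together, pixel by pixel.

    Per pixel: read A, read B, add, shift right one bit, write the
    result -- two memory reads and one write for a single add-and-shift.
    The memory bus, not the CPU, is the bottleneck, so extra cores
    would only contend for the same bus.
    """
    assert len(a) == len(b)
    return bytes((pa + pb) >> 1 for pa, pb in zip(a, b))

# Blending a white pixel (0xFF) with a black pixel (0x00) gives mid-gray.
print(blend(b"\x00\x80\xff", b"\xff\x80\x00"))
```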

Parallel Computing ROI

Parallel Computing is the new future of computers. Object Oriented is no longer the new buzzword. I keep telling people that before they make an Object Oriented Design for their systems they should make flow charts. Good OOD is based on good system flow charts, whether you write them down or do it in your head as an art.

We all used to think that the User Interface is the product and OOD is the way to build it. It now looks like we were wrong:

User Experience is the product and Parallel Design is the way to do it. User Experience (UX) is not User Interface (UI). User Interface defines what the product looks like; in other words, UI defines what the product is. Object Oriented Design defines what the code looks like; in other words, OOD defines what the code is. Parallel Computing defines how the code works; in other words, Parallel Computing defines what the code does. User Experience defines how the application behaves; in other words, User Experience defines what the application does.

I am not using a C++ library because it uses linked lists. I am using that library because it can sort.

I am not buying a product because it looks the way I want it to look; for that I could buy a framed picture instead. I am buying a product because it does something I need and does not do what I do not need.

Parallel Computing is the basis for User Experience. Even if you have a single core, it is better to have a good parallel design. As customers you know this: you don't want to accidentally hit "Print" instead of "Save" and then wait through a 5-second punishment for the dialog to open so you can close it. (see minute 43 of the demo video)

Today we have so many good resources and tools. Now is the time to learn how to work in parallel and produce good products with good UX.


Comments (7)
April 14, 2010 6:55 AM PDT


Peter da Silva I was doing parallel computing on single-CPU systems back in the late '70s and early '80s, without even thinking about it. It was mainstream. It was called the "UNIX command line". The UNIX pipes-and-filters model took advantage of parallelism on a single computer by letting you exploit the parallelism inherent in the division of work between I/O and computation. A UNIX pipeline allowed programs to accumulate and buffer data as fast as the disks could provide it, so that data was available for computation as soon as the CPU-intensive components of the pipeline were ready for it. When multiple CPUs became available, this just happened automatically.

For slow and latency sensitive devices, such as tape drives, one of the earliest tools for buffering I/O was simply to run the "DD" command with a large buffer multiple times in a pipeline: "tar cvf - | dd bs=16k | dd bs=16k | dd bs=16k > /dev/rmt0h" (this was on a PDP-11, 16k was a large buffer). The output of "tar" was uneven and bursty, because it was seeking all over the disk to collect the files for the archive, but the output of the final "dd" was smooth and the tape was able to stream for many megabytes at a time.

This had nothing to do with your proposed redefinition of parallel computing as a user experience design tool; it was a more or less automatic byproduct of good factoring of the problem. It was coarse-grained and could be bottlenecked by non-streaming operations (e.g., sorts), but it was an early and effective tool. There have been similar tools created for specialized problem areas in GUI applications, such as MIDI apps that let you lay out multiple MIDI processing steps in two dimensions and hook them together with "wires", but the same kind of factoring of the problem space for GUI applications hasn't really been found.
April 14, 2010 8:34 AM PDT


Richard H. The image blending example only highlights the inherently non-parallel nature of CPU-memory bus contention. Current PCs with multiple cores aren't 100% parallel at the hardware level, i.e., the von Neumann bottleneck is still present.
Lower your expectations, or get a system that really is parallel at the bus level.
April 14, 2010 8:35 AM PDT


Yves Daoust I don't quite share the comparison of parallel computing with object oriented design. I see the latter as a small step in the art of programming, as opposed to a giant leap for the former.

Anyone can write sequential programs after a few minutes of training on any procedural language. Most people end up writing well structured programs after a few years of practice and find no difficulty switching to Object Oriented Programming.

Writing concurrent programs is of another nature. It is reserved for true experts, with a truly scientific understanding of the issues. Just think of the Dining Philosophers problem: even though the problem statement looks easy, I doubt that ordinary people can solve it correctly.

In fact, I consider that parallel programming is not within reach of ... the human brain, except in simple or symmetrical cases. As soon as there are two or three asynchronous agents, you lose control :)
April 14, 2010 1:44 PM PDT


Thierry Joubert It is true that we see nowadays about as many conferences on Parallel Programming as we saw on OOP during the early 90's. From time to time, big actors have to convince the masses. Today, with Java and .NET, OOP has become the standard (try to give a C/C++ course to students if you are in any doubt about this). The OOP "push" came from the software industry, whose motivation was to provide efficient programming interfaces for programmable products like GUIs, databases, system services, etc. OOP was a movement towards progress.

Parallelism is one of the oldest things in computer science, as stated in the article and several comments, but the Parallel Programming "push" we see nowadays is organized by silicon vendors who failed to keep up with the Moore's Law slope. OOP was not motivated by any limitation, and I see a noticeable difference here.
April 14, 2010 4:47 PM PDT


paul clayden Parallel is a fad and won't last. It's an interim measure to something much much bigger. Pretty soon we'll have analogue computing/quantum computing which is going to rock all our worlds.
April 14, 2010 8:11 PM PDT


Lava Kafle Superb clarification. We have been using parallelism in Java, Oracle, .NET, C#, whatever, since the very beginning of the x64 architectures supported by Intel.
April 18, 2010 3:00 AM PDT

Asaf Shelly
Hi All,

I will start with thanking Peter for the extensive information. Truly something to respect.

This shows us that the basic ideas were already there and were somehow lost in time. It makes me wonder what else we have forgotten.

Back in the old days, applications and drivers usually had only a few components. These were separated by using different source files. Later we had a massive upgrade to classes and objects as part of Object Oriented programming and design. C programmers did not have to write down the object design, whereas C++ programmers found it almost intuitive and mandatory. C programming also defines procedures. Notice the name "Procedure": it means that the function is not a 3-line variable-modification snippet; rather, it is a whole procedure in the main process. The flow chart was also too often not written down, but as we can see by the names, the application was a 'Process' to perform, which had a 'main procedure' and several other 'procedures'. Old-school programming defined Procedures and Structures; we now go back to Tasks and Objects. This is why my website (where the video is found) says "Welcome to the Renaissance"...

I was slowly getting around to replying to Yves Daoust's: "In fact, I consider that parallel programming is not within reach of ... the human brain". See minute 12:30 in the video mentioned at the end of the post. Everything we do is parallel. If you work as part of a big organization, then you probably do Object Oriented Design and manage the programming tasks using the SCRUM methodology. Take a look at SCRUM, copy the principles to your code, and you have a good parallel application. I quote Wikipedia ("http://en.wikipedia.org/wiki/Scrum_(development)"): "...the 'ScrumMaster', who maintains the processes..." There are also sprints, backlogs, priorities, and the daily sync meeting, which is used to profile the operation and keep track of progress. There are interesting things to learn from it; for example, the daily sync meeting is where you report all problems. This means that we don't raise an exception for every problem; instead we collect all the errors and report them when the time is right. This might solve a few problems that parallel loops are struggling with.
The "Dining Philosophers problem" is a way to manage a proposed solution (locks); it is not a way to solve the problem. If instead of using a set of locks you use a service for each resource, the problem becomes completely different.
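The collect-and-report idea can be sketched in a few lines of Python; the function and names are invented for this sketch, not taken from any particular framework:

```python
def process_all(items, worker):
    """Run worker over every item. Instead of raising on the first
    failure (which would abort a parallel loop mid-flight), collect
    the errors and report them together when the batch is done --
    like saving problems for the daily sync meeting."""
    results, errors = [], []
    for item in items:
        try:
            results.append(worker(item))
        except Exception as exc:
            errors.append((item, exc))
    return results, errors

results, errors = process_all([1, 0, 2], lambda x: 10 // x)
print(results)        # partial results survive the one failing item
print(len(errors))    # the failure is reported at the end, not mid-loop
```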

Is the image here http://www.9to5mac.com/intel-core-i7-mac-pro-xserve the answer to Richard's question?

Hi Thierry, I could respectfully argue that OOP was motivated by the limitations of managing large-scale projects, just as parallel programming is motivated by managing large-scale systems. OOP is for design time and parallel programming is for run time. Not that I don't agree with you. It is possible that OOP was focused on so much over the past few years that programmers today think only in objects and find it very difficult to think in tasks.

I guess I have to say to Paul that parallel programming is agnostic about the engine. I am suggesting you use a word processor instead of a typewriter; it does not matter whether you are using MS-Office for Mac, Open-Office, or something new that will be invented 5 years from now. Quantum computing or not, my application should still know how to cancel an operation when it is no longer required.

Thanks for the comment Lava.

Regards,
Asaf

22/4/10

10 Important Moments in the History of Computing

By: Federico Reggiani @ Wednesday, September 23, 2009

1) 1959 – COBOL

For many, COBOL is the most important programming language in history. Many current languages are based on it (Pascal, BASIC, etc.). The greatest test it has passed is the test of time: even today, 50 years later, there are thousands of computers running COBOL applications. It's not that COBOL does things other languages cannot do, but it works well enough that there has been no need to replace it.

2) 1969 – ARPANET

ARPANET is, no less, the network behind the Internet. It was conceived for scientific purposes and ended up being today's most important means of communication. Without a doubt, ARPANET changed our lives.

3) 1970 – UNIX

I'm not saying Linux, I'm saying UNIX. This operating system opened the door to things like several people using one computer (multi-user). This is normal nowadays, but it wasn't back then. And this is not just about the password you set so your family doesn't see the kind of "movies" you watch; it is the basis for the security systems that let us use email, Facebook, Tuenti, etc.

4) 1976 – Apple I

The Apple I was the first computer Apple launched. But it was also the first personal-use computer ever manufactured. With it the "Personal Computer" was born; until then, computers were only for universities and scientists. But Steve Jobs had seen a much more promising and democratic future for computers.

5) 1978 – WordStar

Use a computer for household or small-office tasks? WordStar was born for CP/M (the original D.O.S., which Microsoft bought) in 1978, and version 3.0 for D.O.S. followed in 1982. WordStar opened the door to a new era for computing: it was no longer just for scientists! It also helped many of us finish school with beautifully finished assignments, which we then printed on our dot-matrix printers.

6) 1978 – BBS

BBSs were the first systems to give us social activity over a network. We could send emails, look at files that others left for us to see, share images, software, etc. They were the beginning of what we do today on the Internet, or at least the beginning of today's social networks.

7) 1983 – Microsoft Mouse

Microsoft did not invent the mouse; they bought it ready-made (now where have I heard that before?). Nevertheless, Microsoft is responsible for making the use of this device massive. Using a mouse in '83 was science fiction. Moving an arrow on the screen with your hand? Madness!

8) 1991 – Linux

With Linux, not only was an operating system born; a revolution was born as well. Just as Apple revolutionized the world by launching a "scientific" product for the masses, Linus Torvalds did the same by releasing a product that until then only large corporations built. We all know the profit Microsoft extracts from Windows: they dominated computing and decided who could use a computer and who could not. Thanks to Linus, we can all now have an open, free operating system. Advantages or not, what Linux set in motion is undeniable.

9) 1992 – WWW

Tim Berners-Lee invented the Web. Tim came up with nothing less than Hypertext, or HTML. Imagine the Internet without the Web for a moment. It is hard, because the Internet that comes to mind ALWAYS includes the Web. Of course, chat and email are also the Internet, but the Web is decisive in our lives. Tim once said that "if I had known HTML would turn into Amazon, I would have patented it." A great man. Robert Cailliau also took part: he was the only one who paid attention when Berners-Lee presented the first drafts of the Web. One fact: the WWW was created at the same place as the LHC, CERN in Switzerland.

10) 1998 – Google

The domain was registered in 1997, but Google saw the light in 1998. I see in Google not only the search engine but also the company behind thousands of services and products, from Gmail to AdSense and a long list besides. Without a doubt, the creation of Google is very important in this history, above all because of the speculation about what it has yet to do.

Later came things like P2P, Facebook, and cloud computing. The thing is, it is still not clear what they will offer that is truly decisive in the history of computing and does not end up being merely commercial.

Many things are left out. Things like C and the modern languages, Fortran, Windows 95, SETI@home and the beginning of the cloud, the Apple Lisa (the first with a graphical interface), the Apple Newton (precursor of the Palm and of today's iPhones and smartphones), the first laptop. There are many mentions of Internet-related topics, but that is because the Internet is the most important event, together with the first personal computer.

Also left out is the sale of D.O.S. to IBM, which opened the door to the compatible computers that allowed notable price drops, making PC use in our homes more popular.

Do you think something is missing?

Of course! Windows and D.O.S.!

They are not on the list because, despite being enormous commercial successes that really did change history, they were not the first to do what they do. When Google came out, Yahoo already existed, sure, but Google never worked like Yahoo: Yahoo was updated by people, and Google was the first to use an automatic crawler.

Those who read my few posts can see that I am neither a Linux zealot nor an Apple fanboy (although I use Linux and have an iPhone).

But it truly seems to me that Windows and D.O.S. were something that was bound to happen through the evolution of other things.




Comments:
This reminds me of a very good documentary I saw. It covers the evolution of the INTERNET, how it started and why. It rests on very solid material, much of which I had never heard of, but after looking it up I corroborated it. I liked it. How about you?
http://www.youtube.com/watch?v=FGxDIh7OLno

9/1/10

Introduction to JSTL using NetBeans

Introduction
The latest version of JSTL is JSTL 1.1. Without any hesitation, JSTL is now extremely important in ensuring the success of J2EE web application projects. JSTL is part of the JSP 2.0 specification and requires Java Servlet 2.4 or higher to support its tags.

After completing this tutorial, you are expected to be able to apply JSTL technology to your JSPs, know what the JSTL tags are, and know how and when to use specific tags according to your needs.

Specific Information on JSTL and NetBeans
This tutorial has been compiled, tested, and run under:
1. NetBeans 5.5
2. JSTL 1.1 library package
3. Tomcat 5.5.7 as the server

If you have installed NetBeans successfully, the JSTL library (.jar) can be found on your local hard disk; it is bundled together with NetBeans. It is located in: netbeans_installation_folder\enterprise1\config\TagLibraries\JSTL11

You can also download the JSTL taglib library from the Jakarta Apache project website at http://jakarta.apache.org/builds/jakarta-taglibs/releases/standard/binaries/. The jars included with the JSTL 1.1 library are jaxen-full.jar, jstl.jar, saxpath.jar, standard.jar, and xalan.jar; however, only jstl.jar and standard.jar are required. So why do we need the other jar files? standard.jar depends on other jars like xalan.jar, saxpath.jar, dom.jar, etc. to work properly. You can use J2SE 1.4.2 or higher to avoid these dependencies. In any case, as the JSTL taglib library is bundled together with NetBeans, you do not need to download it.

Roadmap
1. What is JSTL?
2. Why use JSTL?
3. Implementation of JSTL Core Tags
4. Implementation of JSTL Formatting Tags
5. Implementation of JSTL Function Tags
6. Conclusion
7. Appendix

1. What is JSTL?
JSTL stands for JSP Standard Tag Library. JSTL has been standardized and has become one of the most important technologies for implementing J2EE web applications. The main objective of JSTL is to simplify the Java code within JSPs (scriptlets) as well as to increase the level of reusability within a J2EE web application. Before JSTL was introduced, J2EE web applications (especially in the presentation layer, the JSP) were extremely complex and very tough to maintain. It is true that a new developer may take some time to understand all the underlying code within a J2EE web application; this is where JSTL helps.

Here is the basic JSTL flow: a JSP containing JSTL tags is translated and compiled into a servlet (Java code) before being executed. Some classes in standard.jar are required to parse and translate the JSTL tags into servlet code. Last but not least, the compiled servlet is executed accordingly.

There are many more advantages of using JSTL compared to scriptlets. Therefore, it is recommended to replace scriptlets with JSTL in the presentation layer (JSP).

There are 5 major types of JSTL tags:
1. JSTL Core tags, prefixed with c
2. JSTL Format tags, prefixed with fmt
3. JSTL Function tags, prefixed with fn
4. JSTL Database tags, prefixed with sql
5. JSTL XML tags, prefixed with x

JSTL Core Tags
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
Mainly used as replacements for scriptlet logic tags, as well as for basic URL handling: catch, choose, if, forEach, param, when, redirect, import, url, etc.
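As an illustration (the `cart` bean and its properties are invented for this example), a loop and a condition that would otherwise require a scriptlet can be written with the core tags alone:

```jsp
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>

<c:forEach var="item" items="${cart.items}">
    <c:if test="${item.price gt 100}">
        <p><c:out value="${item.name}"/> is a premium item.</p>
    </c:if>
</c:forEach>
```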

JSTL Format Tags
<%@ taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt" %>
Mainly used for displaying number and date/time formats; these tags can be used for internationalization support as well. Examples are setLocale, setTimeZone, setBundle, formatNumber, formatDate, etc.

JSTL Function Tags
<%@ taglib uri="http://java.sun.com/jsp/jstl/functions" prefix="fn" %>
Very useful JSTL tags, mostly used in conjunction with the JSTL core tags. These tags are designed for manipulating strings.
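For example (the `user` bean is invented for this sketch, and the core taglib is declared as well because of the surrounding c:if):

```jsp
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/functions" prefix="fn" %>

<c:if test="${fn:length(user.name) gt 0}">
    <p>Hello, ${fn:toUpperCase(user.name)}!</p>
</c:if>
```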

JSTL Database Tags
<%@ taglib uri="http://java.sun.com/jsp/jstl/sql" prefix="sql" %>
These tags interact with the database level. With the database tags you can run transactions, updates, and queries against the database from your UI level. Personally, I do not prefer these tags; the MVC design pattern should always be retained.

JSTL XML tags
<%@ taglib uri="http://java.sun.com/jsp/jstl/xml" prefix="x" %>
Similar to the core tags, except that the XML tags deal with XML: parsing XML documents, validating XML documents, outputting an XPath expression, and so on.

For in-depth details on all the JSTL tags, you can find more information within your NetBeans installation folder: installation_netbeans_folder\enterprise1\docs\

Additionally, JSTL accepts conditional operators such as 'eq', 'ne', '==', 'empty', 'not', '!=', '>=', '<=', 'and', '&&', and 'or'. Here is the mapping of relational and logical operators to their JSP notations: > is gt, < is lt, >= is ge, <= is le, == is eq, != is ne, && is and, || is or, ! is not, empty tests for an empty value, / is div, and % is mod.
Other arithmetic operators such as +, -, and * can be used within JSTL tags as well.
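For instance, both notations can be mixed freely in an EL expression (the `count` and `title` variables are invented for this sketch, and the core taglib is assumed to be declared):

```jsp
<c:if test="${(count gt 0) and (count le 10) and not empty title}">
    <p>${count * 2 + 1} items match "${title}".</p>
</c:if>
```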

Developing Web Applications, Servlets and JSPs for WebLogic Server

Document Scope and Audience

This document is a resource for software developers who develop Web applications and components such as HTTP servlets and JavaServer Pages (JSPs) for deployment on WebLogic Server®. This document is also a resource for Web application users and deployers. It also contains information that is useful for business analysts and system architects who are evaluating WebLogic Server or considering the use of WebLogic Server Web applications for a particular application.

The topics in this document are relevant during the design and development phases of a software project. The document also includes topics that are useful in solving application problems that are discovered during test and pre-production phases of a project.

This document does not address production phase administration, monitoring, or performance tuning topics. For links to WebLogic Server documentation and resources for these topics, see Related Documentation.

It is assumed that the reader is familiar with J2EE and Web application concepts. This document emphasizes the value-added features provided by WebLogic Server Web applications and key information about how to use WebLogic Server features and facilities to get a Web application up and running.



Guide To This Document
■This chapter, Introduction and Roadmap, introduces the organization of this guide.
■Understanding Web Applications, Servlets, and JSPs, provides an overview of WebLogic Server Web applications, servlets, and Java Server Pages (JSPs).
■Creating and Configuring Web Applications, describes how to create and configure Web application resources.
■Creating and Configuring Servlets, describes how to create and configure servlets.
■Creating and Configuring JSPs, describes how to create and configure JSPs.
■Configuring JSF and JSTL Libraries, describes how to configure JavaServer Faces (JSF) and the JSP Tag Standard Library (JSTL).
■Configuring Resources in a Web Application, describes how to configure Web application resources.
■WebLogic Annotation for Web Components, describes how to simplify development by using annotations and resource injection with Web components.
■Servlet Programming Tasks, describes how to write HTTP servlets in a WebLogic Server environment.
■Using Sessions and Session Persistence, describes how to set up sessions and session persistence.
■Application Events and Event Listener Classes, discusses application events and event listener classes.
■Using the HTTP Publish-Subscribe Server, provides an overview of the HTTP Publish-Subscribe server and information on how you can use it in your Web applications.
■WebLogic JSP Reference, provides reference information for writing JavaServer Pages (JSPs).
■Filters, provides information about using filters in a Web application.
■Using WebLogic JSP Form Validation Tags, describes how to use WebLogic JSP form validation tags.
■Using Custom WebLogic JSP Tags (cache, process, repeat), describes the use of three custom JSP tags—cache, repeat, and process—provided with the WebLogic Server distribution.
■Using the WebLogic EJB to JSP Integration Tool, describes how to use the WebLogic EJB-to-JSP integration tool to create JSP tag libraries that you can use to invoke EJBs in a JavaServer Page (JSP). This document assumes at least some familiarity with both EJB and JSP.
■web.xml Deployment Descriptor Elements, describes the deployment descriptor elements defined in the web.xml schema under the root element web-app.
■weblogic.xml Deployment Descriptor Elements, provides a complete reference for the schema for the WebLogic Server-specific deployment descriptor weblogic.xml.
■Web Application Best Practices, contains Oracle best practices for designing, developing, and deploying WebLogic Web applications and application resources.

14/7/09

GNU/Linux Bride

In this article I want to show that GNU/Linux is more than an operating system and can even help you with your personal life. For example, finding the perfect girlfriend and everything that comes afterwards, which is no small thing…

I hope I don't need to say that this is a humor article, and a rather geeky one xD
Well, enjoy it xDDD

We start by looking for a girlfriend
$ aptitude search novia

We keep her
$ aptitude install novia

We glance at her attributes
$ stat novia

You compare her with another one you had been sizing up for a week
$ cmp novia la_otra

You check whether there is a connection
$ ping novia

And what that connection is like
$ netstat

Yes, we definitely keep her. We make sure she knows we are her boyfriend…
$ chown yo novia

…and that she cannot cheat on us…
$ chmod 700 novia

We mold her to our taste
$ cat 95-60-90 >> novia

We go to a party at a friend's house and things heat up. We slip away to an isolated, hidden place
$ cd .dormitorio_padres_colega

We check that nobody else is around
$ ls -a

We look for the annoying bra clasp…
$ grep 'enganche_sujetador' novia

Down to business. We connect our bodies the traditional way
$ ssh yo@novia

If anyone wants a 69 (or another "slot"), you just have to say so
$ ssh -p 69 yo@novia

We leave our seed
$ wget http://yo.com/semilla

And separate our bodies
$ exit

We tidy and clean the room a bit
$ clear

A little later you find out you scored (damned condom!). Nine months later your girlfriend gives birth
$ tar -xzvf novia.tar.gz

Under pressure from both families, you decide to formalize the relationship and get married. You start a family, with everything that entails…
$ addgroup familia
$ adduser novia familia
$ adduser hijo familia
$ alias parienta="novia"
$ alias crio="hijo"

You make a mental note to remember this "wonderful" day, all to keep the missus from giving you an earful
$ crontab -e

Everything goes well until one day, inexplicably, the missus's wires get crossed and she kills your son
$ pkill hijo

You meditate on everything that has happened, and on women in general
$ man mujeres

And indeed you reach a wise conclusion
$ No manual entry for mujeres

Even though you made sure you would not be cheated on, if the two of you are one race and the kid is another, or you have some other kind of confirmation, go looking for a certain root…
$ find / -name root

* I hope female readers appreciate the humorous touch, including the (sexist?) brushstrokes xD

* No minors were harmed in the making of this story.


source: http://tuxpepino.wordpress.com/2007/...a-en-gnulinux/

14/2/09

15 Top Open-source Tools for Web Developers

by Sam Dean - Feb. 12, 2009

Recently, we covered research showing that nearly half of open source developers are focused on applications for delivery in the cloud. Software as a Service (SaaS) applications are increasingly either employing open source or are built entirely on it. And all of this adds up to an increasing premium on web development skills and good tools for web development in the open source community. The good news is that there are many open source tools to help you with your web project, and given the costs of web development environments and the like, they can save you a lot of money. Here are over 15 good examples of tools and tutorials, with a few that we've covered before appended at the end, in case you missed them.
