Wednesday, May 4, 2011

New 3D TVs

Researchers at MIT's Media Lab announced on Wednesday that they have developed a new approach to glasses-free 3D technology.

The team said they could double the battery life of devices like Nintendo's 3DS portable gaming system without compromising screen brightness or resolution.

The researchers also said that their technique would expand the viewing angle of a 3D screen.

According to Doug Lanman, a postdoc in Associate Professor Ramesh Raskar's Camera Culture Group at the Media Lab, Nintendo's 3DS relies on an older technology known as a parallax barrier. This requires two versions of the same image, one intended for each eye, both sliced into vertical segments and interleaved on a single surface; a layer of opaque stripes in front of the display then ensures that each eye sees only the segments meant for it.
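
In code, the interleaving step is simple enough to sketch. This is a minimal illustration, not MIT's or Nintendo's actual implementation, and the array names and shapes are my assumptions:

    import numpy as np

    def interleave_parallax(left, right):
        # left, right: (H, W) grayscale views of the same scene from two
        # slightly offset perspectives (hypothetical inputs).
        # Even pixel columns carry the left-eye view, odd columns the
        # right-eye view; the barrier's slits then let each eye see only
        # its own set of columns.
        out = np.empty_like(left)
        out[:, 0::2] = left[:, 0::2]
        out[:, 1::2] = right[:, 1::2]
        return out

The catch is that the barrier's opaque stripes hide much of the backlight from each eye, which is exactly the battery drain the MIT team is attacking.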

The team's HR3D system uses two layers of liquid-crystal displays. The top LCD displays a pattern customized to the image beneath it.

This top layer consists of thousands of tiny slits whose orientations follow the contours of the objects in the image. The slits are oriented in so many different directions that the 3D illusion holds whether the image is upright or rotated 90 degrees. Lanman said in a statement that if a device like the 3DS used HR3D, its battery life would be longer, because the barrier layer would block less light; the 3D image would also stay consistent regardless of viewing angle.
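
To make the contour idea concrete, here is one loose way to derive local slit orientations from an image: rotate the intensity gradient so it runs along edges. This is only my sketch of the intuition, not the researchers' published optimization:

    import numpy as np

    def slit_orientations(image):
        # Hypothetical illustration: the gradient points across an edge,
        # so rotating it by 90 degrees gives a direction along the
        # contour, which is where a content-adapted slit would align.
        gy, gx = np.gradient(image.astype(float))
        return np.arctan2(gy, gx) + np.pi / 2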

“The great thing about Ramesh’s group is that they think of things that no one else has thought of and then demonstrate that they can actually be done,” Neil Dodgson, professor of graphics and imaging at the University of Cambridge in England, said in a statement.

"It’s quite a clever idea they’ve got here.”

However, Dodgson said that HR3D is very computationally intensive.

“If you’re saving battery power because you’ve got this extra brightness, but you’re actually using all that battery power to do the computation, then you’re not saving anything,” he said.

I will be waiting for 4D now.

Tuesday, May 3, 2011

Cyborgs?


And now for some more recent technological developments...

A bionic leg is being developed to help people who have lost limbs, Reuters reports.

The US Army is sponsoring a clinical trial to develop the technology that makes the bionic leg work. Ideally, the leg will move in accordance with the patient's muscles and nerves once it has learned the individual's nerve-signal patterns.

"We're really integrating the machine with the person,” says Levi Hargrove to Reuters. Hargrove, who is leading the research, is from the Rehabilitation Institute of the Center for Bionic Medicine in Chicago.

"The way most prosthetics work now is you have mechanical sensors. You have to push and interact with them," Hargrove added.

"With this, you measure the actual neural intent and have that tell the motor what to do."

The bionic leg will give leg amputees greater mobility and more freedom in their daily lives. Meanwhile, prosthetic arms have already been developed with similar technology.

In essence, artificial limbs are being created that move according to the owner's nerve impulses. Once this technology is put into use and improved upon, will we have a real-life Terminator scenario? Seems like something out of a dream.

Monday, May 2, 2011

Colonizing the Moon

In just the past week, Congress has introduced a bill directing NASA to put a manned base on the moon by 2022, and SpaceX founder Elon Musk has said that he'll be sending humans to Mars in as little as 10 years. But can it happen, and do we even want it to?

The "Reasserting American Leadership in Space Act" would tell NASA to "develop a sustained human presence on the moon in order to promote exploration, commerce, science and United States preeminence in space as a stepping stone for the future exploration of Mars and other destinations," all by 2022. That sounds good in theory, but the bill basically just says, "Hey, go do this," without recognizing that it may be both a technologically and fiscally impossible task for NASA to accomplish within that time frame.

Private industry is rapidly catching up to NASA. With the space shuttle about to retire, the agency seems especially likely to be eclipsed over the next ten years. SpaceX might have the credentials to back up its exploration plans, which would put humans on Mars in a decade if everything goes well. That's a big if, though, since SpaceX still has a lot of work to do to get its Falcon Heavy rocket operational by 2012.

Sunday, May 1, 2011

Cars of the Future

A team at the National Institute of Natural Sciences in Okazaki, Japan, is working on a way to use lasers to replace the spark plugs in internal combustion engines.

If successful, we may be driving cars that fire with lasers rather than spark plugs and, in the process, get better fuel efficiency.

Spark plugs, which are electrical devices located in the cylinder head of some internal combustion engines, waste a bit of fuel with each cycle of the cylinders as they ignite the compressed fuel-air mixture.

Lasers could eliminate that waste, giving our cars better miles per gallon.
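
Back-of-the-envelope only, since neither the team nor the article quotes a figure; the 5 percent gain below is a pure placeholder:

    # All numbers are hypothetical; the source gives no efficiency figure.
    miles_per_year = 12000
    base_mpg = 30.0
    assumed_gain = 0.05  # pretend laser ignition adds 5% efficiency

    gallons_now = miles_per_year / base_mpg
    gallons_laser = miles_per_year / (base_mpg * (1 + assumed_gain))
    print(f"Saved per car per year: {gallons_now - gallons_laser:.0f} gallons")
    # -> about 19 gallons per car per year under these made-up numbers

Even a small per-car gain adds up across millions of vehicles, which is why ignition efficiency keeps attracting research attention.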

The biggest hurdle for this new technology is fitting a powerful laser into a very small volume while making sure it doesn't interfere with the rest of the engine.

Full article: http://www.bbc.co.uk/news/science-environment-13160950

Technology is, as usual, on the move. Would you drive a more environmentally-friendly car if it cost you more money? Do you? What are some good ideas for cars of the future?

The Technological Singularity

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.

The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
  • There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
  • Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
  • Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
  • Biological science may provide means to improve natural human intellect.
The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)
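
The "steady curve" argument can be made concrete with a toy projection. Every constant below is a contested assumption of mine, not Vinge's, but it shows how an exponential trend turns an enormous gap into a few decades:

    import math

    # Toy Moore's-law projection; all three constants are rough guesses.
    ops_1993 = 1e9        # ~1 billion ops/sec available to a lab in 1993
    brain_ops = 1e16      # one common guess at brain-equivalent compute
    doubling_years = 1.5  # performance doubling every 18 months

    years = doubling_years * math.log2(brain_ops / ops_1993)
    print(f"Crossover about {years:.0f} years out")  # ~35 years

Under those made-up constants the crossover lands in the late 2020s, inside the 2005-2030 window quoted above.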

What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work -- the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct "what if's" in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.
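
The "still-shorter time scale" runaway can be put into a toy equation (my illustration, not the essay's). If each new generation of intelligence needs only a fixed fraction $k < 1$ of the time its predecessor took to design, infinitely many generations fit into a finite span:

\[
T_\infty = \sum_{n=0}^{\infty} t_0 k^n = \frac{t_0}{1-k}
\]

With $t_0 = 10$ years and $k = 1/2$, everything after the first generation fits inside $T_\infty = 20$ years. A process like that does not taper off; it runs into a wall on the calendar.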

From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in "a million years" (if ever) will likely happen in the next century.

I think it's fair to call this event a singularity ("the Singularity" for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam paraphrased John von Neumann as saying:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Von Neumann even uses the term singularity, though it appears he is thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed.)

In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote:


Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.

Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind's "tool" -- any more than humans are the tools of rabbits or robins or chimpanzees.

Through the '60s and '70s and '80s, recognition of the cataclysm spread. Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the "hard" science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future. Once, they could put such fantasies millions of years in the future. Now they saw that their most diligent extrapolations resulted in the unknowable ... soon. Once, galactic empires might have seemed a Post-Human domain. Now, sadly, even interplanetary ones are.

What about the '90s and the '00s and the '10s, as we slide toward the edge? How will the approach of the Singularity spread across the human world view? For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we'll be able to create human equivalent (or greater) intelligence. (There is the far-fetched possibility that we could make a human equivalent out of less powerful hardware, if we were willing to give up speed, if we were willing to settle for an artificial being who was literally slow. But it's much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment.)

But as time passes, we should see more symptoms. The dilemma felt by science fiction writers will be perceived in other creative endeavors. (I have heard thoughtful comic book writers worry about how to have spectacular effects when everything visible can be produced by the technologically commonplace.) We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of _true_ technological unemployment finally come true.

Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace. When I began writing science fiction in the middle '60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months. (Of course, this could just be me losing my imagination as I get old, but I see the effect in others too.) Like the shock in a compressible flow, the Singularity moves closer as we accelerate through the critical speed.

And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected -- perhaps even to the researchers involved. ("But all our previous models were catatonic! We were just tweaking some parameters....") If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened.

And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era. And for all my rampant technological optimism, sometimes I think I'd be more comfortable if I were regarding these transcendental events from one thousand years remove ... instead of twenty.

Laminin: The God Molecule?

Laminins are major proteins in the basal lamina (formerly improperly called "basement membrane"), a protein network foundation for most cells and organs. The laminins are an important and biologically active part of the basal lamina, influencing cell differentiation, migration, and adhesion, as well as phenotype and survival.



Laminins are trimeric proteins that contain an α-chain, a β-chain, and a γ-chain, found in five, four, and three genetic variants, respectively. The laminin molecules are named according to their chain composition. Thus, laminin-511 contains the α5, β1, and γ1 chains.
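
The naming rule is mechanical enough to write down. This is a toy illustration of the nomenclature only; the function is hypothetical:

    def laminin_name(alpha, beta, gamma):
        # Chain variant numbers concatenate into the name:
        # alpha-5 + beta-1 + gamma-1 -> "laminin-511".
        return f"laminin-{alpha}{beta}{gamma}"

    assert laminin_name(5, 1, 1) == "laminin-511"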

Fourteen other chain combinations have been identified in vivo. The trimeric proteins intersect to form a cross-like structure that can bind to other cell membrane and extracellular matrix molecules. The three shorter arms are particularly good at binding to other laminin molecules, which allows them to form sheets. The long arm is capable of binding to cells, which helps anchor organized tissue cells to the membrane.

The laminins are a family of glycoproteins that are an integral part of the structural scaffolding in almost every tissue of an organism. They are secreted and incorporated into cell-associated extracellular matrices. Laminin is vital for the maintenance and survival of tissues. Defective laminins can cause muscles to form improperly, leading to a form of muscular dystrophy, lethal skin blistering disease (junctional epidermolysis bullosa) and defects of the kidney filter (nephrotic syndrome).

What is bizarre about laminin, however, is its cross-like shape. Some Christians cite it as evidence of God everywhere, though it's just a tiny, cross-shaped protein.

What do you think?

Saturday, April 30, 2011

OLED: The Screen of the Future


In a few years, when people ask, "What's on the tube tonight?" they might be making an unintentional pun. That's because researchers have created a new transistor based on carbon nanotubes that could soon light up televisions and other screens. 

Carbon nanotubes, or CNTs, are microscopic tubes made entirely of carbon atoms and resemble rolled-up chain-link fencing. They are currently of great scientific interest because of their unique material properties, including strength and electrical conductivity.

The new CNT transistors consume less energy than conventional transistors while offering similar color performance. The finding might pave the way for larger, sharper screens based on organic light-emitting diodes, or OLEDs, which offer key advantages over other types of displays, researchers say.

"Our device opens up a whole new realm of materials that could solve this size limitation problem," among other OLED issues, said Andrew Rinzler, a professor of physics at the University of Florida and a co-author of a paper appearing today in the journal Science.

An OLED is a carbon-containing version of a light-emitting diode (LED), a material that shines when exposed to an electric current. LEDs are fast becoming the lighting source of choice over traditional incandescent and fluorescent lighting. They also find use as backlighting in so-called LED TVs, a subset of liquid crystal display (LCD) sets, which along with plasma screens have largely replaced old-school cathode ray tubes and rear-projection TVs over the last decade.

Building a better TV screen 

But with television screens, it always seems you can do one better, and OLEDs might well be the future of the ever-popular electronic.

For starters, OLEDs use about half as much energy as plasma and LCD screens. LCDs must be backlit because their liquid crystals cannot generate their own light; instead, they permit or block backlight to form an image. In OLEDs, however, each pixel can shine on its own. Not only does this save energy, it also produces better contrast than an always-backlit LCD (when an OLED pixel is "off," it is dark).
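
Here is a crude sketch of why per-pixel emission saves power; every constant is invented for illustration:

    import numpy as np

    def lcd_power(frame, backlight_watts=20.0):
        # Hypothetical LCD: the backlight burns at full power no matter
        # what the frame shows; the crystals only pass or block it.
        return backlight_watts

    def oled_power(frame, watts_at_full_white=20.0):
        # Hypothetical OLED: each pixel draws power in proportion to its
        # own brightness (frame values in [0, 1]); black pixels draw ~0.
        return watts_at_full_white * float(np.mean(frame))

    night_scene = np.full((1080, 1920), 0.1)  # mostly dark frame
    print(lcd_power(night_scene))   # 20.0 W regardless of content
    print(oled_power(night_scene))  # ~2.0 W for the same dark frame

Real panels are messier (driver losses, per-color efficiencies), but the content-dependent draw is why dark scenes cost an OLED so little.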

In addition, OLEDs do not have the viewing-angle deficits of LCDs (look at one from the side and the image can appear dim or blurry), nor the glare or the risk of static-image "burn-in" that comes with plasma screens.

Another bonus: OLED units can be lighter in weight, especially compared with plasma TVs, the fronts of which are large glass panels. Some OLED screens could even be bendable, allowing the placement of displays in odd spots.

Yet OLEDs have their own problems. They rely on high voltages to make light, which eats into the screens' lifetime. Due to the difficulty of manufacturing conventional transistors uniformly, OLED screens have been limited in the size department as well. Most OLEDs today serve merely as tiny cellphone displays, and about 30 percent of OLED cellphone screens end up being scrapped due to defects, Rinzler noted.

Carbon nanotubes to the rescue 

The new transistors developed by Rinzler and his colleagues address these issues that have dogged OLEDs. The transistors, for instance, do just fine on lower voltages, and can still produce bright light output of the three primary colors — red, green and blue — needed to render images.

The transistors have a vertical architecture, unlike the lateral, silicon-based transistors used in standard OLED displays. The carbon nanotubes enabled this stacked design, allowing the device to act as both a transistor driving electrical current and as a light emitter. Combining these normally separate components, and doing away with the need for other accessory parts, such as a capacitor, "could be a fairly significant benefit for manufacturers," Rinzler told TechNewsDaily.

Yang Yang, a professor of materials science and engineering at the University of California, Los Angeles, who was not involved in the study, described the new transistors as "very smart."

"This is great work with very encouraging results," Yang said, "which can have significant impacts in future OLED displays."

Future development will now involve building arrays of the pixel-generating devices. The researchers also will scale down the pixels; the experimental transistors produced pixels about 1 millimeter square, while commercial, rectangular screen pixels are in the neighborhood of 300 micrometers by 200 micrometers.
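
Just the arithmetic implied by those figures:

    # Pixel areas from the numbers quoted above, in square micrometers.
    experimental = 1000 * 1000   # 1 mm x 1 mm prototype pixel
    commercial = 300 * 200       # typical commercial screen pixel
    print(experimental / commercial)  # ~16.7x shrink in area still needed
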
Rinzler said he could see TVs on the market using the technology in as little as two to five years, "depending on how quickly large display manufacturers recognize this as an important solution for what they need."

Example: http://www.youtube.com/watch?v=j3MGJ6qV30U

All videos and text excerpts belong to their respective owners.