Tuesday, September 23, 2014

To Preserve or Not To Preserve: Future of Stem Cell Research

The banking, or preservation, of stem cells from sources such as umbilical cord blood and bone marrow has been increasing, as the potential for using stem cells in clinical applications continues to fuel speculation and expectations. Researchers have been steadily exploring how best to use stem cells, which types to use, and how to deliver them to the body, and their findings are not only transformational but also progressive and pragmatic.
Preliminary but promising clinical and experimental trials suggest that stem cells may be able to treat autoimmune diseases such as type 1 diabetes, as well as Parkinson’s disease, brain and spinal-cord injuries, cardiovascular disease, liver disease, kidney disease, and breast cancer, among other illnesses. Whether donated or stored privately, banked cord blood is proving to be a rich source of life-saving treatment, now and in the future, as its possibilities continue to expand.
“Initial studies suggest that stem cell therapy can be delivered safely,” said Dr. Ellen Feigal, senior vice president of research and development at the California Institute for Regenerative Medicine, which has awarded more than $2 billion for stem cell research since 2006 and is enrolling patients in 10 clinical trials this year. In addition to continuing safety research, Dr. Feigal added, “Now what we want to know is will it work, and will it be better than what’s already out there?”
On the other hand, Dr. Charles Murry, co-director of the Institute for Stem Cell and Regenerative Medicine at the University of Washington, notes that very few therapies beyond bone marrow transplants have been shown to be effective. Websites, newspapers and magazines advertising stem cell therapies leave the public with the impression that such treatments are ready to use and that “the only problem is the evil physicians and government, who want to separate people from lifesaving therapies,” he said.
Scientists are also exploring new and innovative approaches, such as reproducing and studying diseases in a dish using cells created from patients with specific ailments. Kevin Eggan of the Harvard Stem Cell Institute is using this technique to study amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease. He began his work five years ago, taking skin cells from two women dying of the same genetic form of ALS. He turned those skin cells into stem cells and then into nerve cells, and discovered an electrical problem: the cells were not signaling to one another properly, which he theorized was causing the neural degeneration that characterizes ALS.
After replicating these nerve cells many times and testing various drug compounds to see which would correct the signaling problem, he found a candidate: an existing medication approved for epilepsy, which could be tested in ALS patients as early as the end of this year.
Increasingly, though, companies are competing with medical institutions to offer stem cell harvesting and preservation services. Parents seeking to preserve stem cells for their children are turning to umbilical cord blood as a source. The harvesting of stem cells from adults, for future use in cellular therapeutics or tissue engineering, has also emerged as a growing facet of the stem cell market. Companies are even emerging that offer patients the opportunity to clone their own lines of embryonic stem cells.
The real medical challenge, however, is to determine which type of cell therapy best addresses each particular condition. Despite numerous experiments and the enormous sums involved, researchers have yet to find the most cost-effective ways to deliver stem cells.
For our relevant BCC Research stem cell report, visit the following link:




Friday, September 19, 2014

The Big Bang Theory of Wearable Technology

The impending explosion of the wearable computing market is one of the most interesting and highly anticipated developments in the high-tech industry. The $3 billion consumer wearables market is intrinsically linked to the $240+ billion smartphone market, and the key driver for wearable computing is the soaring global popularity of smartphones from manufacturers including Apple, Samsung Electronics, LG Electronics, HTC, BlackBerry, Nokia and Microsoft.

The burgeoning field of wearable technology is hitting the mainstream, and one of the hallmarks of high-tech wearable devices is that they get smaller, faster, cheaper, and more powerful with every new product. The computing power of the room-sized Electronic Numerical Integrator and Computer (ENIAC) of the 1940s can now fit inside the chip of a musical greeting card, and today’s smartphones are more powerful than the PCs of, say, five years ago. Now all the capabilities of a smartphone, such as making calls, taking pictures, connecting to the internet, and video chatting, are being condensed into smartwatches: practically everything a phone or a tablet can do.

If the growing trend in the wearable computing industry is to be believed, the time may soon come when phones and tablets are a thing of the past. Google Glass is a perfect example. The product is still under development, but if everything goes as planned, consumers may soon have little need for a standard smartphone. Google Glass responds to verbal commands, augmented by occasional manual interaction via controls located directly on the frame. There has even been talk of eventually including a laser-projected virtual keyboard for those times when voice just isn’t enough. With the ability to access countless sources of information in seconds and relay it to a miniature screen in the upper corner of the wearer’s field of vision, Google Glass makes even 4G-connected handsets seem archaic.

Motorola recently entered the ring with its Moto 360 smartwatch, which is primarily voice-operated and can display messages and reminders on command. The result is a small, stylish accessory that serves as an assistant, calendar, and communicator all at once, aiming to replace the smartphone outright.

However, Apple's entry into the smartwatch arena last week, with a device that won't go on sale until early 2015, raises a question: Can the company work its magic as it has in the past and convince people that they really need a smartwatch, or will this time be different? Referring to its much-awaited product of the year, the Apple Watch, Apple CEO Tim Cook said in a press release, “Apple introduced the world to several category-defining products, the Mac, iPod, iPhone and iPad. And once again Apple is poised to captivate the world with a revolutionary product that can enrich people's lives. It's the most personal product we've ever made.”

In fact, the “wearable category” covers almost everything from Fitbit's $99 Flex fitness tracker and Nike's $99 FuelBand fitness monitor to Samsung's $199 Galaxy Gear smartwatch. In January 2014, Washington-based Innovega revealed iOptik, its latest effort at a wearable computer in the form of contact lenses, at the CES trade show in Las Vegas. Working in concert with the human eye, iOptik uses its lenses to project an image of apps and information through the wearer’s pupil onto the back of the retina; the projected images superimpose to produce a view overlaid with information. The product has yet to be approved by the US Food and Drug Administration (FDA), but the company plans further development later this year or early next year.

The wearable computing concept is evolving to become even more personal, and not just for the benefit of the wearer. In the near future, expectant mothers may wear electronic “tattoos”: smart-sensing stickers that monitor fetal heart rate and brain waves, detect early signs of labor, and even notify the doctor directly when it’s time to go to the hospital.

Wearable computing devices have potential benefits in any situation where information or communication is desired and a hands-free interface is beneficial or essential. In addition to consumer products, many industry-specific applications are emerging in markets such as defense, healthcare, manufacturing and mining.

The growth of the consumer market for wearables largely depends on how rapidly existing smartphone users adopt wearable accessories and alternative devices. With new and improved devices hitting the global market every day, only time will reveal whether wearables will ultimately replace smartphone technology in many consumer environments.

For our relevant report on wearable technology, visit the following link:
http://www.bccresearch.com/market-research/information-technology/wearable-computing-ift107a.html

Wednesday, September 10, 2014

The Future of Multi-Touch Technology is Right Here, Right Now

Touch screen-based interactivity has rapidly progressed from a desired feature to an almost mandatory requirement for displays used in various types of equipment. Vending machines, home appliances, vehicle control consoles and industrial instruments increasingly feature a touch screen. The evolution of human-machine interfaces (HMIs) and human-computer interfaces (HCIs) continues apace, with simple on/off button controls giving way to advanced gesture-based screen interaction that requires so-called multi-touch operation.
The multi-touch technology revolution essentially began in 1982, when the Input Research Group at the University of Toronto, Canada, developed the first human-input multi-touch system. It used a frosted glass panel with a camera placed behind the glass. When one or more fingers pressed on the glass, they appeared to the camera as dark spots against the otherwise white background, and each spot was registered as an input. The system was also pressure sensitive, since the size of a spot depended on how hard the finger pressed against the glass.
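To make the idea concrete, here is a minimal sketch of how such camera-based touch sensing can work, written in Python with OpenCV. It is an illustration only: the threshold and minimum-blob-area values are assumptions chosen for the sketch, not parameters from the Toronto system.

```python
# Minimal sketch of 1982-style camera-based multi-touch sensing:
# fingers pressed on backlit frosted glass appear as dark blobs to a
# camera behind the glass. Blob centroids give touch positions, and
# blob area acts as a crude pressure estimate. Values are illustrative.
import cv2

def detect_touches(frame, dark_thresh=60, min_area=150):
    """Return a list of (x, y, area) tuples, one per detected touch."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Fingers are darker than the bright background, so invert-threshold.
    _, mask = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    # findContours returns 2 or 3 values depending on OpenCV version;
    # the contour list is always second from the end.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    touches = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:              # ignore noise specks
            continue
        m = cv2.moments(c)
        x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
        touches.append((x, y, area))     # larger area ~ harder press
    return touches

cap = cv2.VideoCapture(0)                # stand-in for the glass camera
ok, frame = cap.read()
if ok:
    for x, y, area in detect_touches(frame):
        print(f"touch at ({x:.0f}, {y:.0f}), pressure proxy {area:.0f}")
cap.release()
```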

In 2005, Jefferson Han’s presentation of a low-cost, camera-based sensing technique using frustrated total internal reflection (FTIR) truly highlighted the role the technology could play in developing the next generation of human-computer interfaces. Han’s system was cheap and easy to build, and he used it to demonstrate a range of creatively applied interaction techniques.

In 2007, Apple Inc. changed the face of the consumer-electronics market with the release of the iPhone, a mobile phone with a multi-touch screen as its user interface. The iPhone’s interface and interaction techniques received considerable media attention, and numerous companies have flooded the market with similar products since then. Later that same year, Microsoft announced its Surface multi-touch table, which had the appearance of a coffee table with an embedded interactive screen. Cameras fitted inside the table captured reflections of hands and objects as inputs, and by employing a grid of cameras, the Surface achieved sensing resolution sufficient to track objects augmented with visual markers.

At last year’s CES, 3M debuted its larger-than-life 84-inch Touch System. This “touch table” supports 4K resolution and is currently demonstrating its abilities at Chicago’s Museum of Science and Industry; there are reports that a 100-inch version is under development. Multi-touch display technology holds great promise for future product development. By focusing on a simple manufacturing process, on cost efficiency, and on effective use of existing technologies, the Lemur music controller, believed to be the world’s first commercial multi-touch display product, was brought to market in a span of only three years.
Undoubtedly, multi-touch technology has reshaped the ways in which we interact with the digital world on a daily basis. From smartphones to tablets, multi-touch devices have become a routine part of our everyday lives, and as consumer technology continues to evolve, there’s no telling what the future might hold. Multi-touch PC experiences are well on their way, and Ractiv’s Touch+, launched this August, is one of many. Touch+ enables users to turn any flat surface into a controller for their desktop or laptop, much like the screen of an iPad or other tablet. By detecting the user’s hand movements, Touch+ removes the need for a traditional mouse or trackpad and simulates the experience of using a touch-screen device on a desktop or laptop.
Multi-touch technology combined with surface computing is radically transforming our relationship with computers. Films like Minority Report, The Matrix: Revolutions, District 9, and Quantum of Solace have all included multi-touch interfacing in their predictions of the future, a future we are already beginning to experience today. One of the most important technological advances of the past five years has been in the interface: as new and improved gadgets become capable of an ever-expanding variety of functions, consumers are thinking more creatively about how they interact with them. Usability is a huge priority in technology design, and as a result, the world's leading technology manufacturers are investing millions of dollars in making their devices easier to control.
For our relevant report on multi-touch technology, visit the following link:


Friday, September 5, 2014

Redefining 3D Printing Technology through Innovation


Three-dimensional (3D) printing, also called additive manufacturing, is the process of making three-dimensional solid objects from a digital model by depositing successive layers of material in different shapes. In the last few years, this technology has taken the world of trade and commerce by storm, most notably in retail, traditional manufacturing, automotive, aviation, finance, construction, and electronics. BCC Research estimated the total 2013 global market for 3D printing materials at $245 million. This figure is expected to rise to $285 million in 2014 and to $650 million in 2019, a compound annual growth rate (CAGR) of 17.9% over the five-year period.
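For readers who want to verify the growth figure, here is a quick sketch of the standard CAGR calculation applied to the projections above (a worked check, not part of the report’s methodology):

```python
# Compound annual growth rate (CAGR) check for the figures above:
# $285 million in 2014 growing to $650 million in 2019, i.e. 5 years.
def cagr(start_value, end_value, years):
    """CAGR = (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

print(f"CAGR 2014-2019: {cagr(285, 650, 5):.1%}")  # -> 17.9%
```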

The rate of development and the increasing popularity of 3D printing are astounding, as the technology’s limits are continuously pushed. New breakthroughs are achieved every day, and ideas that seemed impossible only a few short years ago are becoming commonplace, so much so that NASA has decided to take this emerging technology beyond the stratosphere and into space. About the size of a small microwave oven, NASA’s printer is a proof of concept that printing in zero gravity can create objects as accurate and as strong as those produced by a printer on Earth. The objective of the project is to create a machine shop for astronauts in space: they will no longer have to depend on the next resupply mission from the bottom of Earth’s gravity well, because they can create the needed parts right onboard.

The printer is scheduled to be lofted into low-Earth orbit aboard SpaceX’s CRS-4 resupply mission this September. If all goes well with the experiment, NASA will move on to a more elaborate next-generation printer, called the Additive Manufacturing Facility, later this year.

One area that will need to advance before complex portable electronics can be fabricated through additive manufacturing is battery manufacturing. Although 3D printing of a battery is not a new concept, a 3D-printed graphene-based battery could be a game changer for several industries. According to the Vancouver-based company Graphene 3D Lab Inc., batteries based on graphene, a supermaterial consisting of a single layer of carbon atoms, could outperform even some of the best energy storage devices on the market today. The ability to 3D print a battery allows custom shapes to be introduced into a world of electronics in which companies are trying to cram as many components as possible into the smallest space.

Scientists and researchers are now turning to a field where innovation saves lives. While the printing of complete organs for transplants may be decades away, experts at Monash University in Melbourne, Australia, have developed highly realistic 3D-printed body parts that allow trainee doctors to learn human anatomy without needing access to a real cadaver.
"Our 3D printed series can be produced quickly and easily, and unlike cadavers they won't deteriorate - so they are a cost-effective option too," said Paul McMenamin, Director of the University's Centre for Human Anatomy Education.
3D printing technology is being used to manufacture a wide array of items, from auto parts and prototypes to human skin and organs. In a world where mass manufacturing takes place on scales never seen before, 3D printing is starting to spell big changes for the way the world thinks about production. This inevitably means that new frontiers in global trade will be opened as well.

For our relevant report on 3D printing, visit the following link:

