Tuesday, November 26, 2013

Data Deduplication – A Perfect Tool for Database Management

Efficient management and storage of data is a challenge most organizations face today, and various methods and technologies exist to address it. Available storage space must be used efficiently so that the maximum amount of data can be kept in the minimum amount of space. Data deduplication is a method that looks for repetition, or redundancy, in sequences of bytes across a large collection of data. After the first unique instance of a data sequence is stored, later occurrences are replaced with a reference to that stored copy rather than being stored again. Data deduplication is also known as intelligent compression or single-instance storage.
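To illustrate the core idea, here is a minimal Python sketch (a simplified illustration, not any particular product's implementation): incoming byte sequences are keyed by a SHA-256 hash, the first unique copy is stored, and every later occurrence is recorded only as a reference to it.

import hashlib

store = {}       # hash -> stored byte sequence (kept only once)
references = []  # the "logical" data, as a list of hashes

def write(chunk: bytes) -> None:
    """Store a chunk only if it has not been seen before."""
    digest = hashlib.sha256(chunk).hexdigest()
    if digest not in store:
        store[digest] = chunk          # first, unique copy
    references.append(digest)          # repeats become references

def read_all() -> bytes:
    """Rebuild the original data from the references."""
    return b"".join(store[d] for d in references)

# Three chunks arrive, two of them identical:
for chunk in [b"alpha", b"beta", b"alpha"]:
    write(chunk)

print(len(store))   # 2 unique chunks physically stored
print(read_all())   # b'alphabetaalpha' -- original data intact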

File Level Deduplication

In its most common form, deduplication is done at the file level: incoming data is filtered so that a file identical to one already stored is never stored a second time. This level of deduplication is known as the single-instance storage (SIS) method. Deduplication can also occur at the block level, where blocks of data that are identical across two non-identical files are detected and only one copy of each block is kept. Because it analyzes and compares data at a finer granularity, this method frees up more space than file-level deduplication.
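To make the difference concrete, the following sketch (illustrative only, with made-up file contents and an unrealistically small block size) deduplicates the same pair of non-identical files at the file level and at the block level; the block-level pass recognizes the blocks the two files share and therefore stores less.

import hashlib

BLOCK_SIZE = 4  # tiny block size, just for illustration

file_a = b"AAAABBBBCCCC"
file_b = b"AAAABBBBDDDD"   # shares two blocks with file_a, but is not identical

def file_level_store(files):
    """Single-instance storage: identical *files* are stored once."""
    return {hashlib.sha256(f).hexdigest(): f for f in files}

def block_level_store(files):
    """Identical *blocks* across files are stored once."""
    store = {}
    for f in files:
        for i in range(0, len(f), BLOCK_SIZE):
            block = f[i:i + BLOCK_SIZE]
            store[hashlib.sha256(block).hexdigest()] = block
    return store

print(sum(len(v) for v in file_level_store([file_a, file_b]).values()))   # 24 bytes kept
print(sum(len(v) for v in block_level_store([file_a, file_b]).values()))  # 16 bytes kept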



Target Level Deduplication

The second place deduplication can be implemented is at the target level, that is, the backup system; deployment here is easier than deduplicating at the source, the system where the data originates. There are two modes of implementation: inline and post-process. In inline deduplication, the data is deduplicated before it is written to the backup disk. This requires less storage, which is an advantage, but more time, since the backup completes only after the deduplication filtering is done. With post-process deduplication, the storage requirement is higher but the backup itself completes much faster, because filtering happens after the data has been written. The choice between the two depends on the system, the amount of data to be handled, the storage space available for both primary data and backup, the processor capacity, and the time constraints.
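The sketch below contrasts the two modes under simplifying assumptions (an in-memory "disk" and the same hash-based filtering idea as above): inline deduplication filters each chunk before it ever touches the backup disk, while post-process deduplication writes everything first and reclaims space in a second pass.

import hashlib

def _digest(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def inline_backup(chunks):
    """Deduplicate before writing: less disk used, longer backup window."""
    disk = {}
    for chunk in chunks:
        disk.setdefault(_digest(chunk), chunk)   # only unique chunks are written
    return disk

def post_process_backup(chunks):
    """Write raw data first (fast backup), deduplicate in a later pass."""
    raw_disk = list(chunks)                      # needs space for every chunk
    deduped = {}
    for chunk in raw_disk:                       # second pass reclaims space
        deduped.setdefault(_digest(chunk), chunk)
    return deduped

incoming = [b"doc-v1", b"doc-v1", b"doc-v2"]
print(len(inline_backup(incoming)))        # 2 chunks ever written
print(len(post_process_backup(incoming)))  # 2 chunks remain after the cleanup pass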


The greatest advantage is the reduced storage requirement, which in turn improves bandwidth efficiency. As primary data storage has become inexpensive over the years, organizations tend to retain the backup data of a project for longer periods so that new employees can reuse parts of it in future projects. These data stores need cooling and proper maintenance, and hence consume a lot of electric power. Deduplication also reduces the number of disks or tapes an organization needs to buy and maintain, lowering the total cost of storage. It can cut the bandwidth required for backup and, in some cases, speed up both the backup and recovery processes.

Thursday, November 21, 2013

Cloud based 3D Streaming

Cloud-based services have taken a huge leap from being a young niche technology to a mature, highly optimized environment. This drastic change is becoming more and more visible in most of today's technological advances. This write-up walks you through one of the latest advances in cloud-based technology, related to 3D content management.


3D streaming and content rendering has always been a challenge for developers and supporting teams. To understand this complexity, we first need to understand how a 3D model is generated. In simple terms, a graphical element or image is a set of pixels, the smallest units of a graphical element. Pixels are tiny blocks, each with a specific color and location on the screen, and a large number of pixels arranged in a specific order forms an image. Graphics processing has transformed enormously since its inception. The transition started with basic 2D graphic models that had little to no dynamics or effects.


At that stage, graphics programming often relied on what were termed "memory-mapped models," in which developers coded the location and color of each pixel in an image. This approach was fairly simple and practical for basic 2D models and images. As the technology grew, graphic rendering and the associated image delivery platforms (graphics cards, display units, etc.) also began changing in form and size. This was followed by developments in three-dimensional image processing and rendering, which introduced the concept of "depth" into graphics processing.
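A rough Python analogue of that memory-mapped style (purely illustrative, not tied to any specific graphics API) is writing a color directly into a framebuffer array at a given x, y location:

WIDTH, HEIGHT = 8, 4

# The framebuffer: one (red, green, blue) tuple per pixel, all black initially.
framebuffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, color):
    """'Memory-mapped' drawing: address a pixel by location, give it a color."""
    framebuffer[y][x] = color

# Draw a short red horizontal line, pixel by pixel.
for x in range(3, 7):
    set_pixel(x, 2, (255, 0, 0))

# Crude text rendering of the buffer: '#' marks a colored pixel.
for row in framebuffer:
    print("".join("#" if pixel != (0, 0, 0) else "." for pixel in row))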

Now 3D technology reaches its next milestone: it is being made available to users via cloud-based services. Users will have 3D rendering and content delivery at their fingertips with the help of 3D streaming technology from Amazon, which is already prepared to provide these services through EC2's new G2 instances. The G2 instances are built on robust, well-tuned infrastructure that ensures a great 3D experience. As detailed on the company's website, each new G2 instance will have:
·         NVIDIA GRID™ technology using the GK104 "Kepler" GPU, a graphics processing unit with 1,536 CUDA cores and at least 4 GB of video (frame buffer) RAM.
·         An Intel Sandy Bridge processor running at 2.6 GHz with Turbo Boost enabled and 8 vCPUs (virtual CPUs).
·         A minimum of 15 GiB of RAM.
·         Around 60 GB of solid-state disk (SSD) storage.
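For readers who want to experiment, here is a hedged sketch of launching such an instance with the boto Python library (assuming AWS credentials are already configured); the region, AMI ID, and key pair name below are placeholders you would replace with your own, and the AMI would need the NVIDIA drivers installed.

import boto.ec2

# Placeholder values -- substitute your own region, GPU-capable AMI, and key pair.
REGION = "us-east-1"
AMI_ID = "ami-xxxxxxxx"      # hypothetical AMI with the NVIDIA GRID drivers installed
KEY_NAME = "my-keypair"

# Uses AWS credentials from your environment or boto config.
conn = boto.ec2.connect_to_region(REGION)

# g2.2xlarge is the instance type backing the new GPU-enabled G2 family.
reservation = conn.run_instances(
    AMI_ID,
    instance_type="g2.2xlarge",
    key_name=KEY_NAME,
)

instance = reservation.instances[0]
print("Launched G2 instance:", instance.id)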


One limitation, or rather "feature" as the company calls it, is the requirement for client software that uses the Remote Framebuffer (RFB) protocol. This is needed in order to detect the GPU present in the remote cloud-based system. A well-known example of such a compatible product would be TeamViewer, which offers flexibility, ease of use, and compatibility with the new G2 instances. The new G2 architecture will also work with non-RFB-based products, but in that case the GPU would remain undetected.

The new G2 instances running on EC2 clusters present a great window of opportunity for end users to tap into cloud-based graphics rendering. We can surely look forward to a marked increase in the quality of graphics-intensive products in the near future.




Tuesday, November 12, 2013

Google Floats !!!!

The king of innovation, Google, has once again come up with a surprise that will bring a smile to the face of its vast worldwide fan base. Although Google initially denied ownership, the technology firm has now confirmed that the "floating data center" belongs to it. Yes, you heard it right! Google now has a complete data center infrastructure floating on a barge in San Francisco Bay! Is this all hype, or does it really serve a purpose? Let's take a closer look at the details collected from various sources.




Google has always innovated with the resources available in order to maximize its profitability and productivity. This instinct has taken the company from a mere search engine provider to a global technological phenomenon, and it was seen most recently in the alpha-level launch of Google Glass. Google's floating data center, which stands about five stories tall, appeared on San Francisco Bay last week. The structure is massive and attracted the attention of a huge crowd. Although it was visible to the public, access was strictly restricted and security was at a maximum. The initial public response was that the structure was part of some "secret project"; that notion was quickly corrected when the Coast Guard confirmed that the barge is owned by Google.




Looking back at Google's record of innovation, the company has always preferred to think outside the box. This trend was clearly displayed in Google's plan to use weather-balloon-like structures to bring high-speed Internet to cities and towns. On closer analysis, we can see one more core driving force behind each of these revolutionary innovations: cost cutting. The same motivation lies behind the development of the floating barge. Here are a few facts about it:

·         Floating barges avoid the hurdles of renting or purchasing real estate downtown or in the suburbs. The cost incurred is limited to setting up a concrete barge and the modern shipping containers in which the data center infrastructure is housed.
·         Google plans to make maximum use of its virtualization technology to manage and monitor the data center. Because the infrastructure is smaller, management overhead is minimized too.
·         The barge is expected to reduce cooling costs by using cheaper, alternative cooling mechanisms. Although shipping containers may prove costlier than conventional data center buildings, Google is expected to work out the best solution for this concern.
·         Google's data centers currently run at an 85% utilization rate, a load that upcoming projects like this one are expected to help share.
·         Such "unconventional" mechanisms are also aimed at fuelling the cloud service (SaaS) phenomenon in a big way.
·         Upgrading hardware and technology is easier, since a new shipping container or barge can simply replace an obsolete one.

Although there is plenty of speculation about what the mammoth structure is for, there is still no confirmation of Google's actual intentions and plans. Reports from the KPIX 5 channel suggest that the barge is a "floating store" for Google Glass. According to that report, Google will complete the structure at Treasure Island and then move it to Fort Mason, where it would be anchored and opened to the public. So all that we spectators can do is be patient and wait for Google to reveal the suspense…

Tuesday, November 5, 2013

Biohacker Implants Smartphone-Sized Sensor Into His Arm


Biohacker Tim Cannon, the so-called "DIY Cyborg," has implanted a Circadia 1.0 computer chip the size of a smartphone under the skin of his forearm.
The wirelessly charged sensor, developed over the course of 18 months by Cannon and his fellow hackers/artists at Grindhouse Wetware, monitors his vital signs, then transmits that real-time data via Bluetooth to his Android device.
Cannon told Vice’s Motherboard that Circadia 1.0 could “send me a text message if it thinks that I’m getting a fever.” The device could then help determine what factors are causing the fever. Future versions of the sensor are expected to monitor the pulse and — thankfully — come in a smaller, less ghastly package.
As if the bulging device, bruised skin and crude stitches weren’t an obvious giveaway, the procedure was not medically approved, so Cannon recruited some body modification pioneers to perform the surgery. Not only that, he did it “raw dog,” without anesthesia.
“I think that our environment should listen more accurately and more intuitively to what’s happening in our body,” Cannon said. “So if, for example, I’ve had a stressful day, the Circadia will communicate that to my house and will prepare a nice relaxing atmosphere for when I get home: dim the lights, let in a hot bath.”
Cannon expects the first production series of the chip to be ready in a few months and said it will cost around $500. But since the implant procedure will certainly still be medically unapproved, interested hackers will have to seek out the body modification community to have it done. Steve Haworth, the body modification expert who conducted Cannon’s surgery, said he would charge around $200 for the procedure.