Cloud-based services have taken a huge leap from being a young niche technology to a mature, highly optimized environment. This change is becoming more and more visible in most of today's technological advances. This write-up will walk you through one of the latest advances in cloud-based technology, related to 3D content management.
3D streaming and content rendering have always been a challenge for developers and supporting team members. To understand this complexity, we first need to understand how a 3D model is generated. In simple terms, a graphical element or image is a set of pixels, the smallest units of a graphic. Pixels are tiny blocks, each holding a specific color and a location on the screen. A large number of pixels arranged together in a specific order forms an image. Since its inception, graphics processing has transformed on a huge scale. This transition started with basic 2D graphic models with little to no dynamics or effects.
In those days, graphics programming was often described as working with "memory-mapped" models, wherein developers coded the location and color of each pixel in an image. This process was fairly simple and practical for basic 2D models and images. As the technology grew, graphics rendering and the associated image-delivery hardware (graphics cards, display units, etc.) also began changing in form and size. This was followed by developments in three-dimensional image processing and rendering, which introduced the concept of "depth" into graphics processing.
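The memory-mapped model described above can be sketched in a few lines: the image is a flat block of memory, and drawing a pixel means writing a color value at the offset computed from its (x, y) location. The names below (`framebuffer`, `set_pixel`, and so on) are illustrative, not taken from any real graphics API.

```python
WIDTH, HEIGHT = 8, 8          # a tiny 8x8 "screen"
BYTES_PER_PIXEL = 3           # one byte each for R, G, B

# The whole image is just one contiguous block of memory.
framebuffer = bytearray(WIDTH * HEIGHT * BYTES_PER_PIXEL)

def set_pixel(x, y, r, g, b):
    """Write an RGB color at the memory offset for location (x, y)."""
    offset = (y * WIDTH + x) * BYTES_PER_PIXEL
    framebuffer[offset:offset + 3] = bytes((r, g, b))

def get_pixel(x, y):
    """Read the RGB color stored for location (x, y)."""
    offset = (y * WIDTH + x) * BYTES_PER_PIXEL
    return tuple(framebuffer[offset:offset + 3])

set_pixel(2, 1, 255, 0, 0)    # a red pixel at column 2, row 1
```

Early 2D work amounted to exactly this kind of bookkeeping, pixel by pixel; 3D rendering adds depth and projection on top, which is where dedicated GPUs come in.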
Now, 3D technology reaches its next milestone: it will be made available to users via cloud-based services. This means that users will have 3D rendering and content delivery at their fingertips with the help of 3D streaming technology from Amazon. Amazon is already prepared to provide 3D streaming services to its users via EC2's new G2 instances, whose robust, well-provisioned infrastructure ensures a great 3D experience. As detailed on the company website, the new G2 instances will have:
- NVIDIA GRID™ technology using the GK104 "Kepler" architecture: a Graphics Processing Unit with 1,536 CUDA cores and at least 4 GB of video (frame buffer) RAM.
- Intel Sandy Bridge processors running at 2.6 GHz with Turbo Boost enabled, providing 8 vCPUs (virtual CPUs).
- A minimum of 15 GiB of RAM.
- Around 60 GB of solid-state disk storage.
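For readers who want to try one of these instances, the launch looks the same as for any other EC2 instance type. The sketch below uses the AWS CLI; the AMI ID, key pair, and security group names are placeholders you would replace with values from your own account and region.

```shell
# Launch a single G2 instance (g2.2xlarge) with the AWS CLI.
# ami-xxxxxxxx, my-key-pair, and my-security-group are placeholders.
aws ec2 run-instances \
    --instance-type g2.2xlarge \
    --image-id ami-xxxxxxxx \
    --key-name my-key-pair \
    --security-groups my-security-group \
    --count 1
```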
One limitation, or rather "feature" as the company calls it, is the requirement for client software that speaks the Remote Framebuffer protocol (RFB). This is required in order to detect the GPU present in the remote cloud-based system. A common example of a compatible product is TeamViewer, which provides flexibility, ease of use, and compatibility with the new G2 instances. The new G2 architecture also works with non-RFB products, but the GPU remains undetected in that case.
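To give a feel for what "speaking RFB" means: an RFB session (specified in RFC 6143) opens with the server sending a 12-byte ProtocolVersion greeting such as `RFB 003.008\n`, which the client parses before negotiating security and framebuffer updates. The helper below is a hedged sketch of just that first parsing step, not a full client.

```python
def parse_rfb_version(greeting: bytes) -> tuple:
    """Return (major, minor) from an RFB ProtocolVersion message.

    The message is always 12 bytes: b"RFB xxx.yyy\n", where xxx and
    yyy are zero-padded decimal version numbers (RFC 6143, section 7.1.1).
    """
    if len(greeting) != 12 or not greeting.startswith(b"RFB "):
        raise ValueError("not an RFB ProtocolVersion message")
    major, minor = greeting[4:11].split(b".")
    return int(major), int(minor)

print(parse_rfb_version(b"RFB 003.008\n"))  # → (3, 8)
```

A real client would reply with the highest version it supports and then continue the handshake; the point here is only that GPU detection on G2 rides on this kind of well-defined remote-display protocol rather than on a generic screen-sharing stream.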
The new G2 instances on EC2 present a great window of opportunity for end users to tap into cloud-based graphics rendering. We can surely look forward to a drastic increase in the quality of graphically processed products in the near future.