Thursday, December 26, 2013

Amazon RDS Supports PostgreSQL!

Amazon Web Services has officially announced that Amazon RDS now supports PostgreSQL, adding it to the list of supported database engines. This gives users the flexibility of cloud services while working with systems that rely on PostgreSQL. Such flexibility enables users to easily create, manage, scale and optimize their PostgreSQL setups.

Looking back at the traditional process, PostgreSQL database creation and management used to be complex and cumbersome, demanding considerable effort and technical know-how. Amazon aims to simplify this by leveraging its SaaS, PaaS and IaaS capabilities. With the powerful and efficient management console provided by Amazon, users can now easily create and manage PostgreSQL instances. The console provides auto-configured instances with prefilled parameters that let users set up an instance with just a few mouse clicks.
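For those who prefer scripting over mouse clicks, instance creation can also be driven through an AWS SDK. Below is a minimal Python sketch; the identifier, instance class, storage size and credentials are illustrative placeholders, and the actual API call is shown only as a comment because it requires configured AWS credentials.

```python
# Sketch of creating an RDS PostgreSQL instance programmatically.
# All names, sizes and credentials below are illustrative placeholders.

def build_postgres_instance_params(identifier, instance_class,
                                   storage_gb, username, password):
    """Assemble the parameter set an RDS CreateDBInstance call expects."""
    return {
        "DBInstanceIdentifier": identifier,
        "DBInstanceClass": instance_class,
        "Engine": "postgres",            # the newly supported engine
        "AllocatedStorage": storage_gb,  # in gigabytes
        "MasterUsername": username,
        "MasterUserPassword": password,
    }

params = build_postgres_instance_params(
    "my-postgres-db", "db.m1.large", 100, "dbadmin", "change-me")

# With an AWS SDK installed and credentials configured, the actual call
# would look roughly like:
#   import boto3
#   boto3.client("rds").create_db_instance(**params)
```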

The main advantage of Amazon RDS is that users can keep using all the tools and services currently available for the database engine. PostgreSQL has become a leading choice for web and mobile application development, and this trend will only accelerate with support from Amazon.

The current integration from Amazon includes a few key features that make it truly user friendly and efficient. They are explained below:

* Amazon RDS provides preconfigured parameters and settings, allowing quick creation, deployment and management of PostgreSQL instances.

* The analytics feature has been optimized to allow real-time monitoring and tracking of usage statistics.

* Another useful feature is automated status/activity updates, which notify the user by SMS or email at the registered mobile number or email address.

* Version control and patch management are handled by the system. This process is automated and allows the system to patch the database and maintain proper versioning.

*  Amazon has optimized speed and performance by extending the Provisioned IOPS range to more than 25,000 IOPS for PostgreSQL instances.

*  Scaling a database is a piece of cake in Amazon RDS. As always, easy scalability is an advantage of PostgreSQL instances on Amazon.

*  Apart from this, Amazon also provides enhanced security and backup/restoration mechanisms.

Main Features:

Location-based services – Amazon supports geolocation services with PostgreSQL through the PostGIS extension, which extends the relational schema to handle geospatial data.

Multiple language support – Support for multiple procedural languages is included, with three language extensions: PL/Perl, PL/pgSQL and PL/Tcl.

Enhanced search – Amazon supports PostgreSQL's full-text search, allowing users to perform text-based searches.

JSON support – The system supports the native JSON data type for data manipulation as well as the hstore extension for key/value storage.

Apart from these, the system includes all core features of the PostgreSQL database.


Monday, December 16, 2013

Apple’s iBeacon

When Apple recently launched iOS 7 and followed it up with another set of phones, many expected an NFC-enabled iPhone and/or iPad. However, not only did Apple keep away from NFC, it also opted for another technology it believes can be a potential game changer: iBeacon.

What is iBeacon?
iBeacon is a novel technology that enables mobile apps to detect when an iPhone is near a "beacon": a small wireless sensor that uses Bluetooth Low Energy (BLE) to exchange data with the phone.
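To make this concrete, here is a small Python sketch that parses the data section of an iBeacon advertising frame following the commonly documented layout: a 16-byte proximity UUID, big-endian major and minor identifiers, and a signed TX-power calibration byte. The sample UUID below is just an example value.

```python
import struct
import uuid

def parse_ibeacon_payload(payload: bytes) -> dict:
    """Parse the 21-byte iBeacon data section: a 16-byte proximity UUID,
    big-endian major and minor numbers, and a signed TX-power byte."""
    if len(payload) != 21:
        raise ValueError("expected 21 bytes of iBeacon data")
    major, minor, tx_power = struct.unpack(">HHb", payload[16:])
    return {
        "uuid": str(uuid.UUID(bytes=payload[:16])),
        "major": major,          # e.g. identifies a store
        "minor": minor,          # e.g. identifies a shelf in the store
        "tx_power": tx_power,    # calibrated RSSI at 1 m, in dBm
    }

# Build a sample frame with an example UUID, major=1, minor=7, -59 dBm.
sample = (uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0").bytes
          + struct.pack(">HHb", 1, 7, -59))
info = parse_ibeacon_payload(sample)
```

An app listening for beacons would match on the UUID and use major/minor to decide which content to show.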



BLE Technology

iBeacon makes use of Bluetooth Low Energy (also known as Bluetooth Smart), a wireless network technology for transmitting data that is characterized by reduced power consumption and lower cost. It consumes minuscule amounts of energy, so there is minimal impact on the device's battery. Estimote Inc, a maker of beacons, boasts one that can run for up to two years on a single coin-cell battery. Imagine that!

How does it work?

Imagine you are walking past a store that has a beacon installed, iPhone in hand. The "beacon" can be thought of as a context-aware miniature sensor capable of spotting your location. If your iPhone has the store's app installed, then once you are within the beacon's range, the beacon can push specific data (such as advertisements and promotions) to your iPhone through that app.

Further, this could enable payments at the point of sale (POS), so you don't need to take out your wallet or card to pay. With this feature, iBeacon could become a genuine NFC competitor.

Pros:

·         Compatibility – Most recent phones, including the iPhone, are compatible with BLE.
·         Range – iBeacon's range is up to 50 meters, significantly greater than NFC's.
·         Low power consumption – The most attractive feature so far: even with constant use, battery impact is minimal.
·         Works indoors (unlike GPS)
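The range figure can be made concrete: beacons broadcast a calibrated "measured power" (the expected RSSI at one meter), and apps commonly estimate distance with a log-distance path-loss model. A rough Python sketch, where the path-loss exponent `n` is an environment-dependent assumption:

```python
def estimate_distance(rssi, tx_power=-59, n=2.0):
    """Rough distance in meters using the log-distance path-loss model:
    rssi = tx_power - 10 * n * log10(d), solved for d.
    tx_power is the beacon's calibrated RSSI at one meter;
    n is an environment-dependent path-loss exponent (~2 in free space)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# At the calibration point the estimate is one meter by definition,
# and a weaker signal implies a larger distance.
one_meter = estimate_distance(-59)
ten_meters = estimate_distance(-79)
```

Real apps smooth the noisy RSSI readings and usually report only coarse buckets (immediate, near, far) rather than exact meters.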

Cons:

·         Expensive – iBeacons are slightly more expensive than NFC chips.
·         Security is a concern.

Compatible devices

The following hardware is considered iBeacon compatible:
·         iOS devices with Bluetooth 4.0 (iPhone 4S and later, iPad 3 and later, iPod touch 5, iPad mini and later)
·         Android devices with Bluetooth 4.0 and Android 4.3 and later (Samsung Galaxy S3/S4, Samsung Galaxy Note II/III, HTC One, Nexus 7 2013 edition, Nexus 4, Nexus 5, HTC Butterfly, Droid DNA)
·         Macintosh computers equipped with OS X 10.9 (Mavericks) and Bluetooth 4.0 using the MacBeacon application from Radius Networks

The claim that iBeacon could be a potential NFC killer is supported by the fact that it is already being adopted worldwide by e-commerce giants. PayPal, for one, is already working on it to enable wireless payments. On December 6, 2013, Apple made a huge statement by deploying iBeacons across its 254 US retail stores. Although iBeacon primarily underlines Apple's dislike of the data-sharing capabilities of near field communication (NFC), it is a great product idea and could revolutionize contactless payment systems in the near future.


Wednesday, December 11, 2013

7 Google Products in 2013

Google celebrated its 15th anniversary this year!

1. Google Glass
In 2013, for the first time, the mantle of innovation was passed from Apple to Google. This was largely possible because of Google Glass, the first optically-driven wearable computer aimed at general consumers.
Glass isn't set to hit the mass market until 2014, but the company began disseminating units to an unspecified number of "explorers" early this year. That led to a wave of publicity, then a backlash and now, as the year comes to a close, continued uncertainty about the viability of the category.
Google Glass's primary issue is a chicken-and-egg problem: since so few people own a pair, they are not yet socially acceptable, and a major part of the Glass experience is receiving weird looks from others. Google could work out some of those kinks; a partnership with fashionable eyewear brand Warby Parker, for example, would likely yield some less geeky designs. Even if Glass is a colossal flop, though, the product has succeeded in making Apple look comparatively timid with its lineup of mildly tweaked phones (think: the iPhone 5C and 5S) and tablets and its rumored iWatch.
2. Android Takes 81% of the Market
Though many consumers continue to equate the smartphone with the iPhone, four-fifths of all smartphones actually sport Android. That means the smartphone market is shaping up to look a lot like the PC segment, with Google playing the role of Microsoft. But while Windows is a cash cow for Microsoft, Google doesn't actually make any money from Android. Instead, Android is designed to sell advertising. With such a large chunk of the market, Google has brilliantly transitioned from the desktop era to the mobile age.
Of course, that's not how Apple sees it. In a September interview with Bloomberg Businessweek, Apple CEO Tim Cook cited stats showing that despite Android's market-share dominance, 55% of mobile traffic comes from iOS devices. Cook also indirectly dubbed many Android-based devices "junk." Again, this is a moot point when your intention is to sell ads.
3. Google Buys Waze for $1.1 Billion
The world earned new respect for Google Maps in 2012 after Apple's disastrous introduction of its own Maps app. But the mapping category keeps evolving. Waze, an Israeli company, was ahead of the curve in incorporating real-time information, like traffic, into maps. Many realized Waze was one of very few companies to offer such data along with its own credible mapping infrastructure, which set off a bidding war that reportedly included Apple and Facebook. In the end, Google won out and, as a result, maintained its reputation as the Internet's premier cartographer.
4. Moto X Launches
In August, Google announced the Moto X, the first smartphone designed together by Google and its Motorola unit. (This was also the first major release to follow the company's $12 billion purchase of Motorola.) Though the Moto X received positive reviews, it lacked any strong differentiator. Even its Touchless Control, which brings your phone to life by uttering "OK Google now," appeared in a new line of Verizon Droid smartphones the week before the Moto X was unveiled.
In other words, this wasn't seen as a breakthrough — a tough challenge with so many other Android manufacturers. Moto, which was seen as a hedge against patent trolls, has not yet seamlessly integrated into the company and appears to be just one of many partners, albeit one that Google owns.
5. The Introduction of Chromecast
In July, Google introduced the Chromecast, a $35 dongle that plugs into a TV's HDMI port and streams video and music from services like YouTube and Netflix, controlled from a phone, tablet or laptop. Its low price and simplicity made it one of the year's surprise hits and gave Google a long-sought foothold in the living room.

6. Google Stock Hits $1,000
In October, Google stock entered the four-figure range, joining an elite club including Priceline, Seaboard and Berkshire Hathaway. Such psychological barriers are often meaningless — a looming stock split will send it back to three-digit territory soon — but it underscored the company's stellar financial performance this year.
7. Google Play Passes 50 Billion App Downloads
This summer, just as Apple announced 50 billion downloads on the App Store, Google was also crowing about the same number of downloads. Google Play launched as Android Market just a few months after the App Store in 2008. However, for much of the ensuing period, Google Play was seen as an also-ran next to the App Store. This is partially because developers generally release iOS versions of their apps ahead of their Android iterations — in fact, often months ahead.
That may be changing. With more than 80% of the global market, justifying an iOS-first strategy is increasingly difficult. If Android does get the edge in new development, it will be harder for consumers to defend their devotion to iOS as well.

Friday, December 6, 2013

Next Generation USB Connector

As mobile devices get increasingly slimmer, so too will their corresponding USB connectors. Even better, you won't have to flip the cable when you try to slip it in upside down. Development of the next-generation USB connector, called Type-C, is underway; it will be thinner and sleeker than current USB 3.0 cables, according to the USB 3.0 Promoter Group, which is made up of industry heavy hitters including Microsoft, Hewlett-Packard and Intel.

To pack the punch of the USB 3.1 standard, which can move data at 10 gigabits per second, into a smaller cable, the connector will closely resemble the USB 2.0 Micro-B in size. But it has a few advantages over existing models: specifically, it's reversible, meaning users no longer need to worry about plug orientation.
The plug design is similar to Apple's Lightning cables and will take away one of USB's main frustrations. The downside is that the new cables won't work with existing connectors.
The Type-C connector is built on existing USB 3.1 and USB 2.0 technologies and will have scalable power capabilities, meaning it will be able to charge a wide range of gadgets.
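As a back-of-the-envelope illustration of the 10 gigabits per second figure, the sketch below computes an idealized transfer time, deliberately ignoring protocol and encoding overhead:

```python
def transfer_time_seconds(size_bytes, line_rate_gbps=10.0):
    """Idealized transfer time at a given line rate, ignoring protocol
    and encoding overhead (real-world throughput is somewhat lower)."""
    bits = size_bytes * 8
    return bits / (line_rate_gbps * 1e9)

# Moving a 25 GB file at the raw 10 Gb/s USB 3.1 signaling rate:
seconds = transfer_time_seconds(25 * 10**9)
```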
“While USB technology is well established as the favored choice for connecting and powering devices, we recognize the need to develop a new connector to meet evolving design trends in terms of size and usability,” said Brad Saunders, USB 3.0 Promoter Group Chairman, in a statement. “The new Type-C connector will fit well with the market’s direction and affords an opportunity to lay a foundation for future versions of USB.”

Monday, December 2, 2013

Google to Launch the New Nexus 10

There have been precious few details on Google’s second-generation Nexus 10, but new reports are now suggesting that the 10-inch tablet could go on sale as early as next week.
A product page for the device was temporarily listed on the Google Play store earlier this week and the page indicated that the tablet would go on sale on November 21. Perhaps unsurprisingly, Google has since deleted the page from its website.
The story was first spotted by Ubergizmo, although the tech website has since removed its article on the issue.
Little is known about the second-gen Nexus 10 at this time -- although a recent leak claimed that the device comes with the same 2560 x 1600 resolution display, a quad-core Qualcomm Snapdragon 800 CPU and 3GB of RAM. It is also expected to launch with Android 4.4 KitKat, the latest iteration of Android, which is now available for most other Nexus tablets.

Tuesday, November 26, 2013

Data Deduplication – A Perfect Tool for Database Management

Efficient management and storage of data is a problem most organizations face these days, and various methods and technologies are in place to solve it: the available storage space must be used efficiently so as to store maximum data in minimum space. Data deduplication is a method that looks for repetition or redundancy in sequences of bytes across a large collection of data. The first uniquely stored version of a data sequence is simply referenced at further points rather than stored again. Data deduplication is also known as intelligent compression or single-instance storage.

File Level Deduplication

In its most common form, deduplication is done at the file level: no identical file is stored twice, which is achieved by filtering the incoming data so the same file is never stored repeatedly and unnecessarily. This level of deduplication is known as single-instance storage (SIS). Deduplication can also occur at the block level, where blocks of data that are identical across two non-identical files are detected and only one copy of each block is stored. This method frees up more space than the former, as it analyzes and compares data at a deeper level.
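Block-level deduplication can be sketched in a few lines of Python: chunks are keyed by a cryptographic hash, each unique chunk is stored once, and files become lists of chunk digests. The tiny 4-byte block size is for illustration only; real systems use kilobyte-sized blocks and persistent indexes.

```python
import hashlib

class BlockStore:
    """Block-level deduplication sketch: each fixed-size chunk is stored
    once, keyed by its SHA-256 digest; files become lists of digests."""

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}   # digest -> raw block (each stored once)
        self.files = {}    # file name -> ordered list of digests

    def put(self, name, data: bytes):
        digests = []
        for i in range(0, len(data), self.block_size):
            chunk = data[i:i + self.block_size]
            d = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(d, chunk)  # skip if already stored
            digests.append(d)
        self.files[name] = digests

    def get(self, name) -> bytes:
        return b"".join(self.blocks[d] for d in self.files[name])

store = BlockStore()
store.put("a.txt", b"AAAABBBBCCCC")
store.put("b.txt", b"AAAABBBBDDDD")   # shares two blocks with a.txt
```

Here six logical blocks collapse into four stored blocks, because the two files share their first two chunks.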



Target Level Deduplication

The second type of implementation is at the target level, i.e. the backup system. Deployment here is easier than source-side deduplication. There are two modes of implementation: inline and post-process. In inline deduplication, the filtering is done before the data is written to the backup disk. This requires less storage, which is an advantage, but more time, as the backup completes only after the deduplication filtering is done. With post-process deduplication, the storage requirement is higher but the backup itself completes much faster. The choice depends on the system, the amount of data to be handled, the storage space available for both the system and the backup, the processor capacity and the time constraints.
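The inline variant, filtering before anything lands on the backup disk, can be sketched like this (a toy model: a real system would persist the digest index and handle hash collisions):

```python
import hashlib

def inline_dedup_backup(records, backup_disk):
    """Inline deduplication: hash each incoming record and skip the
    write when an identical record already reached the backup disk.
    A post-process system would instead write everything first and
    scan for duplicates afterwards."""
    seen = set()
    for rec in records:
        digest = hashlib.sha256(rec).hexdigest()
        if digest not in seen:      # filter BEFORE writing
            seen.add(digest)
            backup_disk.append(rec)
    return backup_disk

disk = inline_dedup_backup(
    [b"alpha", b"beta", b"alpha", b"beta", b"gamma"], [])
```

Five incoming records shrink to three writes, which is exactly the storage saving inline filtering buys at the cost of hashing every record on the critical path.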


The greatest advantage is reduced storage requirements, which also improves bandwidth efficiency. As primary data storage has become inexpensive over the years, organizations tend to keep a project's backup data for longer so that new employees can reuse it in future projects. These data storehouses need cooling and proper maintenance, and hence consume a lot of electric power. Deduplication reduces the number of disks or tapes the organization must buy and maintain for data storage, cutting the total cost of storage. It can also reduce the bandwidth required for backup and, in some cases, speed up both the backup and the recovery process.

Thursday, November 21, 2013

Cloud based 3D Streaming

Cloud based services have taken a huge leap from young niche technology to a mature and highly optimized environment. This drastic change is becoming more and more visible in most of today's technological advances. This write-up will walk you through one of the latest advances in cloud technology, related to 3D content management.


3D streaming and content rendering has always been a challenge for developers and supporting teams. To understand this complexity, we need to understand how a 3D model is generated. In simple terms, a graphical element or image is a set of pixels, the smallest units of a graphical element. Pixels are tiny blocks, each with a specific color and a location on the screen; a large number of pixels arranged in a specific order forms an image. Since its inception, graphics processing has transformed on a huge scale. This transition started with basic 2D graphic models with little to no dynamics or effects.


In those days, graphics programming was often described in terms of "memory-mapped models": developers coded the location and color of each pixel in an image. This was fairly simple and practical for basic 2D models and images. As the technology grew, graphic rendering and the associated delivery platforms (graphics cards, display units, etc.) also changed form and size. This was followed by developments in three-dimensional image processing and rendering, which introduced the concept of "depth" into graphics processing.
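The "memory-mapped" pixel model is easy to illustrate: a frame buffer is just a grid where each (x, y) location holds a color. A toy Python version:

```python
# A toy "memory-mapped" frame buffer: every pixel is addressed by its
# (x, y) location and holds an (r, g, b) color value.

WIDTH, HEIGHT = 4, 3
framebuffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, color):
    """Write one pixel, much as early 2D graphics code did."""
    framebuffer[y][x] = color

set_pixel(1, 2, (255, 0, 0))   # a single red pixel at column 1, row 2
```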

Now 3D technology reaches its next milestone: it is being made available to users via cloud based services. This means users will have 3D rendering and content delivery at their fingertips through Amazon's 3D streaming technology. Amazon already provides 3D streaming to its users via EC2's new G2 instances, which are backed by a robust, well-provisioned infrastructure that ensures a great 3D experience. As detailed on the company website, the new G2 instances will have:
·         NVIDIA GRID™ technology using GK104 “Kepler”. This will be a Graphics Processing Unit that has 1,536 CUDA cores and at least 4 GB of video (frame buffer) RAM.
·         Intel Sandy Bridge processor running at 2.6 GHz with Turbo Boost enabled, 8 vCPUs (Virtual CPUs).
·         Minimum of 15 GiB of RAM.
·         Around 60 GB of Solid state disk storage.


One limitation, or rather "feature" as the company calls it, is the requirement for client software that speaks the Remote Framebuffer (RFB) protocol, which is needed to take advantage of the GPU present in the remote cloud-based system. A good and common example of a compatible product is TeamViewer, which provides flexibility, ease of use and compatibility with the new G2 instances. The new G2 architecture also works with non-RFB products, but the GPU would remain undetected in that case.

The new G2 technology using EC2 clusters presents a great window of opportunity for end users to utilize the capability of cloud based graphics rendering technology. We can surely look forward to a drastic increase in quality of graphically processed products in the near future.




Tuesday, November 12, 2013

Google Floats!

The king of innovation, Google, has once again come up with a surprise that will bring a smile to the faces of its vast worldwide fan base. Although Google initially denied ownership, the technology firm has now confirmed that the "floating data center" belongs to it. Yes, you heard that right: Google has a complete data center infrastructure floating on a barge in San Francisco Bay! Is this all hype, or does it really serve a purpose? Let's take a closer look at the details collected from various sources.




Google has always innovated with available resources to maximize profitability and productivity. This instinct has taken the company from a mere "search engine" provider to a global technological phenomenon, and it was on display again in the alpha-level launch of Google Glass. Google's floating data center, standing about five stories tall, appeared on San Francisco Bay last week. The massive structure attracted the attention of a huge crowd, but although it was visible to the public, access was strictly restricted and security was tight. The initial public reaction was that the structure was part of some "secret project"; this false notion was quickly corrected when the Coast Guard confirmed that the barge is owned by Google.




Looking back at Google's record of innovation, the company has always preferred thinking outside the box. This was clearly displayed in its plans to use "weather balloon"-like structures to bring high-speed Internet to cities and towns. On close analysis, we can see one more core driving force behind each of these revolutionary innovations: cost cutting! The same motivation is behind the development of the floating barge. Here are a few facts about it:

·         Floating barges minimize the hurdles of renting or purchasing real estate in the suburbs or downtown. The cost incurred would be limited to setting up a concrete barge and modern day containers in which the data center infrastructure is stored.
·         Google plans to maximize the use of their virtualization technology to manage and monitor the data center. As the infrastructure is smaller in size, management overheads will be minimized too.
·         The barge is expected to reduce cooling costs by using alternative, cheaper cooling mechanisms. Although shipping containers may prove more costly than conventional data center buildings, Google is expected to work out the best solution to this concern.
·         Google data centers are currently running at an 85% utilization rate which is expected to be shared by such upcoming projects.
·         Such “unconventional” mechanisms are also aimed at fuelling the cloud service (SaaS) phenomenon in a big way.
·         Updating hardware and technology is easier, as a new shipping container or barge can simply replace an obsolete one.

Although there are numerous speculations and assumptions as to what the mammoth structure is, there is still no confirmation of Google's actual intentions and plans. Reports from channel KPIX 5 say the barge is a "floating store" for Google Glass: according to them, Google will complete the structure at Treasure Island and then ship it to Fort Mason, where it will be anchored and opened to the public. So all we spectators can do is be patient and wait for Google to reveal the suspense…

Tuesday, November 5, 2013

Biohacker Implants Smartphone-Sized Sensor Into His Arm


Biohacker Tim Cannon, the so-called "DIY Cyborg," implanted a Circadia 1.0 computer chip the size of a smartphone under the skin of his forearm.
The wirelessly charged sensor, developed over the course of 18 months by Cannon and his fellow hackers/artists at Grindhouse Wetware, monitors his vital signs, then transmits that real-time data via Bluetooth to his Android device.
Cannon told Vice’s Motherboard that Circadia 1.0 could “send me a text message if it thinks that I’m getting a fever.” The device could then help determine what factors are causing the fever. Future versions of the sensor are expected to monitor the pulse and — thankfully — come in a smaller, less ghastly package.
As if the bulging device, bruised skin and crude stitches weren’t an obvious giveaway, the procedure was not medically approved, so Cannon recruited some body modification pioneers to perform the surgery. Not only that, he did it “raw dog,” without anesthesia.
“I think that our environment should listen more accurately and more intuitively to what’s happening in our body,” Cannon said. “So if, for example, I’ve had a stressful day, the Circadia will communicate that to my house and will prepare a nice relaxing atmosphere for when I get home: dim the lights, let in a hot bath.”
Cannon expects the first production series of the chip to be ready in a few months and said it will cost around $500. But since the implant procedure will certainly still be medically unapproved, interested hackers will have to seek out the body modification community to have it done. Steve Haworth, the body modification expert who conducted Cannon’s surgery, said he would charge around $200 for the procedure.


Thursday, October 31, 2013

Android KitKat on the Nexus 5

Google launched its latest sweet-themed Android operating system just in time for Halloween: KitKat.
Right now, it's only available on the Nexus 5, which just went on sale in the Google Play store. Other Nexus device users will get it sometime in November, while other Android owners will get it whenever their carrier decides to upgrade them.

Google holds out the hope that KitKat is the one Android version that will rule them all; it has been designed for the slowest of smartphones as well as speed machines like the Nexus 5. So for all those users still struggling on Android 2.3 Gingerbread (that would be roughly a third of all Android users), the KitKat upgrade is a huge deal.
The interface is full of incremental improvements. Scrolling is faster; fonts look sharper. Emoji icons have been added to the keyboard. There are no more widgets to worry about. A translucent search bar sits at the top of each of your pages of apps. Fire up a game or open an e-book, and the UI goes away altogether.

Thursday, October 24, 2013

Ubuntu Saucy Salamander is Here

The latest version of Ubuntu was released on October 17, 2013. Version 13.10, codenamed Saucy Salamander, includes some major enhancements over the previous release. What's fascinating about the launch is the inclusion of a stable version of Ubuntu Touch, an operating system from the Ubuntu project that can be installed on supported smartphones and tablets.



According to Ubuntu community manager Jono Bacon, the desktop version has many new options. The stable version of Ubuntu Touch, by contrast, which is aimed primarily at manufacturers and developers, brings only basic applications such as a browser, clock, weather indicator and calculator, all created by members of the Ubuntu community.

What’s new in Ubuntu 13.10?

Saucy Salamander brings some new options worth trying. Smart Scopes is one of them: it delivers search results for any term entered in the Dash search. The feature was present in the previous version of Ubuntu, but only Amazon delivered results; now results extend to many other services such as Reddit, Wikipedia, eBay, Foursquare and Grooveshark.


There are some changes in user-interface too, although these are minor and nothing unusual. Ubuntu 13.10 brings eighteen new wallpapers and updated versions of Files, Rhythmbox, and LibreOffice.

Ubuntu Touch OS for tablets and smartphones

You can install Ubuntu Touch on a Galaxy Nexus or LG Nexus 4 by following these directions, but be cautious of a few things: you must be very careful during the installation process or your smartphone can be damaged; the build does not have all the features of a retail phone; and the process will erase all data on the device, which cannot be recovered by restoring Android.

Ubuntu Touch is stable but not fully complete yet, so with a little patience you may prefer to try it later. The other devices that support Ubuntu Touch are the Samsung Nexus 10 and ASUS Nexus 7; currently, the OS is restricted to these devices.

Although Ubuntu Touch is an early version, the interface can be handled entirely through gestures: drag from the left edge to open the application launcher and switch between anchored applications (multi-tasking), drag from the right edge to see a circle of open applications, or drag down from the top to show indicators and settings.