Software is eating the world, and it's a no-brainer that GE decided to take the plunge into software development. Late in 2011, GE announced that it would invest $1B in software over three years as it seeks to use software to make its products more profitable. One could argue that this was a late move for GE; after all, IBM recognized the power and importance of software in the nineties and has since invested heavily in software and services while divesting some of its hardware assets.
On the heels of its software initiative, GE launched its Industrial Internet initiative, the convergence of the global industrial system with advanced computing, analytics, low-cost sensing and connectivity via the Internet. To make this initiative successful, GE needs key enabling technologies such as cloud, big data and analytics to come together so that it can make sense of the vast amounts of data that its machines will create. And on that note, earlier this week GE announced that it would invest $105 million for a 10% stake in EMC and VMware's newest venture – Pivotal.
Pivotal inherited assets and people from VMware and EMC with a mission to build a new platform for a new era. The platform will consist of Cloud Foundry for its Cloud Fabric, Spring and vFabric for its Application Fabric, and Pivotal HD, GemFire and GPDB for its Data Fabric. While these assets may be individually successful (Pivotal is expected to bring in $300 million in revenue by the end of 2013), integration remains a key challenge and will take Pivotal a long time to address. Still, GE has faith in management's ability to execute.
Will Pivotal enable GE to compete against other companies in the IoT space? Will the Pivotal One Platform enable GE to build the services it needs for its Industrial Internet initiative? Netflix didn't choose Cloud Foundry and other Pivotal assets because they didn't meet its needs, and instead built its own platform. Unfortunately for GE, IBM has a head start, having launched its Smarter Planet initiative in 2008 and shown 25% YoY growth in the most recent quarter. So rather than building its own platform, GE made the right move by focusing on its core competencies and partnering with someone who can build the platform for it. It certainly used a rather unorthodox model of partnership, taking a sizable investment stake in Pivotal rather than simply buying the product, and thereby gaining a strong ability to influence the product direction. Perhaps GE's investment will pay off…
This post was originally posted on cloudfieldnotes, a Rishidot site
At its annual Directions conference on March 5th, IDC talked about the shift the ICT industry is making to the 3rd platform, the ecosystem driven by mobile, social, cloud and big data. The shift is occurring from the LAN/Internet, client-server and PC era (the 2nd platform), which was itself preceded by the mainframe/terminal era (the 1st platform). While the shift itself is not surprising, the interesting fact is that the 3rd platform is where 90% of the growth opportunity lies over the next 7 years. But buyers are looking for solutions rather than pure technology, so it becomes important for vendors to expand the value they bring to the table from silos to mashups. It's not just about cloud or mobile or big data or social anymore; it's about combining these into something meaningful that improves the customer experience. To build the right experience, vendors should design their products for the consumer first and then enhance them for the enterprise, rather than following an enterprise-first/only policy, and remember that mobility goes beyond smartphones and tablets as everyday connected devices, from cameras to cars to toothbrushes, proliferate in our lives.
IDC believes that applications for the 3rd platform are going to be developed on, and will live in, PaaS solutions. This view does not match what Forrester's Q3 2012 Global Cloud Developer Survey found, which is that 71% of cloud developers use IaaS, and specifically Amazon, to deliver applications. IDC does have an interesting view, though, and one I tend to agree with: the next generation of PaaS solutions will be industry-focused vertical platforms such as FinQCloud, Euronext, BaseSpace and Panoptix. These next-generation vertical platforms will become powerhouses of valuable data, and CIOs are going to gravitate towards this data. Thus, it becomes increasingly important for vendors to understand Data Gravity, a concept first described by Dave McCrory.
The 3rd platform is powerful because it enables a new buyer of IT products: the LOB executive. This LOB executive is no longer dependent on internal IT to deliver what is needed and is looking for offerings that can be easily consumed via a subscription model. Ultimately, IDC recommends that we prepare for the death of dedicated IT and embrace shared models to be successful. But what traditional IT organizations really need to do is evolve into service brokers in order to support these new decision makers, and worry less about in-house vs. off-premise, as business needs will determine where IT lives.
At Netflix, even developer meetups have a movie-like experience… The first Netflix OSS meetup was held on Wednesday at the Netflix HQ, in their theatre room, with popcorn and sodas! The energy in the room was fantastic as people came in to learn more about the projects Netflix has open sourced, starting with the Curator project in 2011, and its plans for 2013.
I must say that Netflix is among the few companies that have the right idea: its platform is the business enabler, while its content is its competitive advantage. By open sourcing its platform and inviting others to contribute, it is able to focus its big investment dollars on services unique to Netflix while leveraging industry experience and best practices to keep moving the platform forward. In fact, Netflix has done such a great job with developer relations that developers keep contributing to the open source projects even after they leave the company. While there weren't many in the room contributing to the projects, Netflix is hoping that the picture changes over time.
Such is the interest in Netflix tools that within 24 hours of announcing the next meetup, over 150 people had already signed up. However, not many people are going to be able to use Netflix OSS to develop cloud applications end to end. As it stands today, Netflix OSS is a collection of components; companies would require engineering talent to put the pieces together, and not many have it. Even if they could put the pieces together, the Achilles' heel of the Netflix platform is its reliance on Amazon. Netflix has been burnt by Amazon, with the recent Christmas Eve outage being the most visible example.
Netflix clearly understands these issues. It wants to make its platform easy to adopt and work towards building a platform ecosystem. It also wants to eliminate AWS as its single point of failure and add portability and availability to the platform. Its 2013 roadmap highlights build and deploy, recipes (sample applications), availability, analytics and persistence as key categories. The Netflix OSS overview and roadmap can be found here, and the lightning talks about the various projects can be found here. It will be interesting to see how the platform evolves and whether scalable, feature-rich public cloud alternatives to AWS emerge to make the platform truly portable.
A typical cost discussion comparing internal data center and cloud provider costs generally involves the phrase "Capex vs. Opex", where the argument in favor of cloud computing is that it promises to transform IT's cyclical capital investment into a smoother, year-round operational expense. The truth of the matter, however, is that IT budgets remain static while cloud computing costs are variable. Not only can monthly cloud usage vary, but on-demand prices can also change throughout the year. This uncertainty often gives rise to a tendency to either resist adopting the cloud or over-purchase capacity in the cloud.
But what if, with the help of a cloud financial brokerage service, CFOs could fix their IT costs in the same way that they can currently fix power, oil and other commodity spending by trading options? IT departments could do proactive risk management in the same way that, say, Southwest Airlines uses fuel hedging to insulate itself against fuel price fluctuations. Cloud brokers could offer a very valuable service to CFOs: predictability in their cloud bill, while at the same time helping them reduce costs. Capacity planning and analytics tools could be leveraged to analyze historical IT capacity usage and combine it with forecasted growth to produce an estimate of monthly cloud usage. IT departments could then purchase only what they need, with the option to buy more at a fixed price or sell their excess capacity at a fixed price. This would eliminate the need to over-purchase capacity, as is traditionally done with capital IT purchases.
One way to achieve cost savings and predictability is to purchase reserved instances in the cloud. For example, Amazon reserved instances can be cost advantageous at utilizations as low as 17% over a 3-year term or 32% over a 1-year term. With a sufficiently large customer base, a cloud financial brokerage service could match one customer's cloud usage with gaps in other customers' usage. This would allow it to purchase cheaper reserved cloud instances in advance and facilitate time sharing. Customers could be assured of a fixed on-demand price for as long as 3 years because the underlying cost of instances for the broker is fixed. Additionally, some of the savings from purchasing reserved instances could then be passed on to customers purchasing on-demand instances through the broker.
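To make the break-even figure concrete, here is a minimal Python sketch of the calculation a broker (or a customer) might run. The upfront fee and hourly prices below are hypothetical placeholders, not actual AWS rates; they are chosen only to show how a roughly 17% break-even utilization can fall out of the arithmetic.

```python
# Illustrative break-even calculation for reserved vs. on-demand pricing.
# All prices are hypothetical placeholders, not actual AWS rates.

HOURS_PER_YEAR = 8760

def break_even_utilization(upfront, reserved_hourly, on_demand_hourly, years):
    """Return the fraction of time an instance must run for the
    reserved option to cost less than paying purely on-demand."""
    total_hours = HOURS_PER_YEAR * years
    # Cost as a function of hours actually used:
    #   reserved:  upfront + reserved_hourly * hours_used
    #   on-demand: on_demand_hourly * hours_used
    # Setting the two equal and solving for hours_used:
    hours_used = upfront / (on_demand_hourly - reserved_hourly)
    return hours_used / total_hours

# Hypothetical figures for a small instance over a 3-year term.
print(break_even_utilization(upfront=300.0,
                             reserved_hourly=0.012,
                             on_demand_hourly=0.08,
                             years=3))  # ~0.17, i.e. roughly 17% utilization
```

Run across a portfolio of customers and instance types, the same comparison is what would let a broker decide how many reserved instances to buy on its customers' behalf.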
I spoke to Sarah Cochrane, SVP Strategic Alliances at Strategic Blue, late last year about the company, which is one such cloud broker-dealer. Strategic Blue was founded by James Mitchell, who was a structured power trader at Morgan Stanley. He founded Strategic Blue when he realized that there were parallels between electricity and infrastructure as a service. While trading IT as a commodity is the company's vision, it currently offers financial services to end users that give them payment flexibility and may lower their cloud bill. Strategic Blue steps into the billing chain between the user and the cloud provider and can alter the terms under which the user pays for cloud services. For example, Strategic Blue can bill customers monthly for the usage of reserved instances and offer them the flexibility to pay by credit card or by invoice, on a day of the month that suits their needs. It can aggregate usage across multiple clouds and send a single bill to its customers, and it passes a portion of the savings from purchasing reserved instances on to its customers.
There is a cost advantage to billing cloud services through a cloud broker such as Strategic Blue, especially if the broker can guarantee that purchasing through them will never be more expensive than purchasing directly from the cloud provider on equivalent terms. Cost predictability will go a long way towards encouraging the on-demand, rapidly elastic use of cloud services.
On Sept 28th, Amazon announced the new Kindle Fire tablet. This post digs into the two key cloud integrations that Amazon brings with Fire.
Amazon Silk, the Cloud-Accelerated Browser
Silk is a split browser: part of the work of loading and rendering a page runs on the device, while the heavy lifting is offloaded to Amazon's EC2 infrastructure, which also caches content and pre-fetches pages to speed up browsing.
Free cloud storage for all Amazon content
Amazon has designed the tablet with only 8 GB of storage space, but it allows users to leverage its cloud storage solution to save content (books, music, video, apps) that they do not use frequently. Content is available instantly to stream or download for free, and Amazon uses its Whispersync technology to sync content across devices.
By leveraging its massive cloud infrastructure, Amazon can deliver a tablet with pared-down hardware at a low price point. Although free cloud storage and streaming are available for those who go beyond 8 GB, this privilege is restricted to Amazon content. If users have invested in content from other providers and plan to use the Fire as their primary content device, they will either need to make sure that the non-Amazon content fits within the 8 GB or pay for additional storage on the Amazon cloud.
What's next? Beyond Browsing and Storage
Amazon could use its cloud computing infrastructure not just to improve a user's browsing experience or give away free storage, but also to offload processing for resource-intensive applications such as security scanning, number crunching and gaming. MIT's Technology Review explores some of the options. It will be interesting to see how Amazon further evolves this marriage between AWS and the Kindle.
This post was originally posted on cloudave.com
When considering adopting the public cloud, companies are often concerned about security, privacy and regulatory requirements. To address these concerns, Salesforce announced the Data Residency Option (DRO) at Dreamforce this year, under which the customer retains ownership of sensitive data. DRO is a cloud gateway that protects customer data by encrypting or tokenizing sensitive data, according to customizable policies, before it is transmitted to Salesforce.
The DRO technology, also known as Virtual Private SaaS (VPS), was developed by Navajo Systems, which was acquired by Salesforce in August 2011. Navajo was founded in 2009 by Dan Gross, Dr. David Movshovitz, Doron Abram, Ofer Shochet and Eitan Bauch, and incubated in the JVP Media Labs incubator in Israel.
How it's done
In the case of encryption, data deemed sensitive is encrypted prior to transmission, and the encryption keys are stored locally and managed by the customers themselves. This protects against the risk of a third party gaining unauthorized access to the data. When authorized users request the data from Salesforce, VPS reverses the process and presents a readable version of the data to the user.
In the case of tokenization, sensitive data is substituted with randomly generated values prior to being stored in the Salesforce cloud. The mapping between the original data and the tokens is stored in a secure database on premise. While tokenization adds the overhead of managing and securing a database, it satisfies residency requirements because the actual data doesn't leave the organization. Furthermore, tokenized values can't be deciphered without access to the secure database. When users access Salesforce, VPS replaces the tokens contained in Salesforce's responses with their corresponding actual values.
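To illustrate the idea (this is not Navajo's or Salesforce's actual implementation), here is a minimal Python sketch of a tokenizing gateway: sensitive values are swapped for random tokens before a record leaves the premises, and the token-to-value mapping stays in a local store. The TokenVault class and the field names are hypothetical.

```python
import secrets

class TokenVault:
    """Toy in-memory stand-in for the secure on-premise mapping database.
    A real gateway would persist the mapping in a hardened data store."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse the existing token so equal values map to the same token.
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = "tok_" + secrets.token_hex(8)
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()

# Outbound: substitute sensitive fields before the record is sent to the SaaS cloud.
record = {"name": "Alice Smith", "account": "123-456-789"}
outbound = {field: vault.tokenize(value) for field, value in record.items()}

# Inbound: replace tokens in the SaaS response with the original values.
inbound = {field: vault.detokenize(token) for field, token in outbound.items()}
assert inbound == record
```

A real gateway would also decide, per policy, which fields to tokenize and whether properties such as searchability or sort order need to be preserved, which is where much of the engineering complexity lies.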
VPS can reside on premise, within a customer's firewall, or be deployed in the cloud by Salesforce itself. In the latter case, Salesforce users within a particular region use VPS without having to install or maintain it in their own networks; here the customer's primary concern is data residency within a region rather than privacy.
A similar technology is available from CipherCloud. Customers interested in other cloud providers, such as Amazon AWS or Box.net, can use CipherCloud to protect their sensitive data while enjoying the benefits of the public cloud.
The concept that Navajo and CipherCloud have implemented is simple, yet very powerful, because it addresses the key security concern companies have when adopting the public cloud. I believe that all SaaS vendors should provide this option to help customers cross the bridge over to public clouds.
On July 27th, former NASA CTO Chris Kemp, along with Steve O'Hara and Devin Carlen, launched a start-up called Nebula, named after a project that Kemp started at NASA. The startup was seeded by Google's first investors, Andy Bechtolsheim, David Cheriton and Ram Shriram, and has secured venture financing from Kleiner Perkins Caufield & Byers and Highland Capital Partners. This post digs into what the company is building and how it is positioned.
What are they building?
With over 90 companies supporting OpenStack, there is growing demand from enterprises for technology solutions built on it. Nebula is building a turnkey OpenStack hardware appliance, which will run the OpenStack compute and object storage controllers. The appliance includes a 48-port 10 GE switch, giving the Nebula cloud controller direct control over the network.
While the hardware appliance allows Nebula to optimize and lock down some of the variables, Nebula also locks down the servers that can connect to the appliance: only servers certified under Facebook's Open Compute open-source server project or from Dell's PowerEdge-C family will be supported. Nebula has thus narrowed its target market to customers buying new hardware or upgrading hardware in their data centers. This strategy may not work in the current economic environment, where customers need to make the most of what they already have.
Additionally, the current Cactus release of OpenStack Compute has some key gaps, such as billing and logging, identity management, a self-service portal (to sign up for service, track bills and lodge trouble tickets), monitoring, policy management and scheduling (http://docs.openstack.org/cactus/openstack-compute/admin/content/nova-conceptual-mapping.html).
On a positive note, some of these gaps may be addressed in the next release of OpenStack, Diablo. As one of the creators of OpenStack, Nebula is also in a unique position to fill some of these gaps itself, such as security and compliance. Additionally, Nebula can scale to 1,024 appliances daisy-chained together, supporting up to 24,576 server nodes or up to 300,000 virtual machines at a density of 1 VM per core, giving customers the ability to build a large-scale cloud. Kemp's ultimate design goal is 1 million hosts and 60 million VMs.
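A quick back-of-the-envelope check shows how those figures hang together; the per-appliance and per-node values derived below are my inferences from the quoted totals, not published Nebula specifications.

```python
# Sanity-checking the quoted scaling figures.
# Derived values are inferences, not vendor specifications.
appliances = 1024
server_nodes = 24576
vms = 300_000            # at a density of 1 VM per core

nodes_per_appliance = server_nodes / appliances   # 24 nodes behind each appliance
cores_per_node = vms / server_nodes               # ~12.2 cores per node

print(nodes_per_appliance, round(cores_per_node, 1))  # 24.0 12.2
```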
Who are they going to sell to?
Corporations, service providers, VARs, SMBs, researchers and global data centers looking to build large-scale private or public clouds. Pricing for the product has not been set yet, but is likely to be competitive with 10 GE top-of-rack switches. The product will hit the market in 1Q2012.
Are there alternatives?
Nebula has the reputation of its founders and the momentum of OpenStack to catch a customer's eye! But Nebula will compete head to head with existing cloud solutions in the market such as VCE's Vblock, the Microsoft Azure appliance, IBM's CloudBurst and HP CloudSystem, as well as new OpenStack-based solutions such as those developed by Dell and Piston Cloud Computing, a company founded by Joshua McKenty, the cloud architect of NASA's Nebula cloud infrastructure.
According to NIST (http://1.usa.gov/eZ8PSn), cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service (metering and billing).
In other words, cloud computing is the ability to deliver a dynamic computing environment where resources (infrastructure, development platforms or software) can be obtained on demand, scaled up or down, and paid for by usage. By creating resource pools that can be shared across users, it eliminates the need to dedicate resources to particular uses, thereby saving money and energy. The ability to obtain resources on demand creates a more responsive and productive IT environment.
This dynamic IT environment can be delivered by a provider for general public use (public cloud) or operated solely for an organization (private cloud). A hybrid cloud brings together private and public clouds and may be used for bursting and load balancing between them.