Raspberry Pi gets 6x the power, 2x the memory and still costs $35

Makers, academics and generally anyone who likes to play with computers: get ready for some awesomesauce. Raspberry Pis, the tiny Linux computers that currently sell for $35, are getting a makeover that gives a tremendous boost to their compute power and doubles their memory while keeping the price the same.

The Pi 2 boards will be available today, and Pi creator and CEO of Raspberry Pi (Trading) Ltd. Eben Upton says the organization has already built 100,000 units, so buyers shouldn’t have to wait like they did at the original Pi launch. The Pi 2 will have the following specs:

  • SoC: Broadcom BCM2836 (CPU, GPU, DSP, SDRAM, and single USB port)
  • CPU: 900 MHz quad-core ARM Cortex A7 (ARMv7 instruction set)
  • GPU: Broadcom VideoCore IV @ 250 MHz, OpenGL ES 2.0 (24 GFLOPS), 1080p30 MPEG-2 and VC-1 decoder (with license), 1080p30 H.264/MPEG-4 AVC high-profile decoder and encoder
  • Memory: 1 GB (shared with GPU)
  • Total backwards compatibility (in terms of multimedia, form-factor and interfacing) with Pi 1

This is a significant expansion of the Pi’s capabilities, although I’ve stopped being surprised at how far hobbyists have taken the original platform. In a blog post for Broadcom, Upton wrote:

[blockquote person=”” attribution=””]Raspberry Pi 2 has enough raw computing power to do anything that a PC can — surf the Web, word processing, spreadsheet algorithms and more; we expect to see a lot of you using it as a general-purpose productivity machine. We’re really pleased with it — and we think that our community of fans, developers, educators and industrial customers will agree.
[/blockquote]

When I emailed Upton to ask how he managed to keep the price so low while adding so much performance, he said that shaving a few cents off other components paid off. “We were able to hold the price by paying a lot of attention to the little things (the price of an HDMI connector, the exact finish on the PCB),” he wrote. “We ended up finding a few tens of things each of which saved $0.10, and then spending all those savings in one go on more RAM and CPU performance.”

The Pi 2 uses a Broadcom chip, much like the original Pi did. The new SoC, the BCM2836, pairs the same VideoCore multimedia hardware with a lot more CPU power.

And for those in the U.S. hoping to see more Pi action in their kids’ schools, Upton also told me that the Foundation has hired its first U.S. employee and is hoping to do a lot more with the U.S. education system in 2015. That’s great news, because Upton actually created the Pi with kids in mind. His goal was to get them excited about hardware, coding and computers the way he was inspired back in the day by the Commodore 64 platform. You can check out his commentary on this and more from his appearance at one of our conferences in 2013. It’s an excellent talk.

[youtube https://www.youtube.com/watch?v=emQuoPF3Rsc&w=560&h=315]

US weather agency to boost supercomputers to 2.5 petaflops each

The National Oceanic and Atmospheric Administration (NOAA) plans to upgrade the performance of its two supercomputers with a roughly tenfold increase of capacity by October 2015, the agency said Monday. With the upgrade, the agency is hoping for more accurate and timely weather forecasts.

The supercomputer upgrade comes courtesy of a $44.5 million contract with [company]IBM[/company], which is subcontracting with Seattle-based supercomputer-maker Cray Inc. to improve the systems. Of that $44.5 million, the NOAA said that $25 million “was provided through the Disaster Relief Appropriations Act of 2013 related to the consequences of Hurricane Sandy.”

The National Weather Service (part of NOAA) will reap the benefits this month when the two supercomputers double their current total capacity from 0.776 petaflops to 1.552 petaflops as part of the first step of the overhaul. With the bump in power, the National Weather Service will be able to run an upgraded version of its Global Forecast System with better resolution and longer weather forecasts.

Global Forecast System

When the upgrade is finished, each supercomputer should be able to handle a capacity of 2.5 petaflops, which makes for a total capacity of 5 petaflops.

While that’s a sizable increase of capacity, the world’s fastest supercomputer, China’s Tianhe-2, can deliver 55 peak petaflops.

In November, IBM announced that it would build two new supercomputers based on IBM’s OpenPower technology for the U.S. Department of Energy. Those new supercomputers should be functional by 2017 and will supposedly deliver more than 100 peak petaflops.

Meteor wants to be the warp drive for building real-time apps

For most organizations, building a modern-day cloud application that rivals something as clean and fast as Uber or Facebook is no easy task. The art of crafting a responsive app that loads and transmits data in real time demands a developer team skilled in multiple web frameworks, runtimes and languages, such as Angular.js, Node.js and PHP.

Meteor Development Group wants to simplify this process, and it thinks the best way to do so is to build everything in JavaScript, the ubiquitous programming language that’s the backbone of web browsers. Meteor Development Group’s open-source project, dubbed Meteor, is essentially a souped-up JavaScript application framework that’s designed to make it easier for coders to create real-time apps like those found at big tech companies while appeasing enterprises who are more familiar with JavaScript than other languages.

“It’s a fresh design to how to build modern architecture out of JavaScript,” said Meteor Development Group founder Matt DeBergalis. “With the right design, you can build experiences like Uber with ten lines of code.”

Enterprises stuck in the past

The way DeBergalis explains it, twenty years ago, the best applications were found in the enterprise. With the advent of mobile computing and real-time applications like Uber and Facebook that are constantly sucking up and distributing data, however, consumers are now used to a type of real-time functionality that’s hard to find among enterprise apps.

“The enterprise is still stuck on Internet Explorer 8,” said DeBergalis. “Why can’t I see the financial reports on my phone? The answer is we [enterprises] can’t afford to write the thing you want.”

And that’s the crux enterprises face: It’s difficult to hire developers who have the skills to craft these types of complex applications, as it requires finding coders who are well-versed in multiple languages and frameworks like Node.js and Ruby on Rails. And once you find these talented developers, you have to cough up the cash that their skill set commands; hiring in the tech industry is competitive, as you probably already know.

Meteor Development Group founder Matt DeBergalis

But if coders are able to build real-time apps using their knowledge of JavaScript, enterprises may find it easier to acquire talent and it could boost the speed of building apps.

“We’ve completed a two-plus year development project that got us to a stable production-ready JavaScript platform that makes it dramatically faster to write apps,” said DeBergalis.

The promise of a real-time web framework

The Meteor framework is an example of what’s known as isomorphic JavaScript, a term popularized by Airbnb engineer Spike Brehm. When code is described as isomorphic, that basically means it can execute on both the server side (where storage systems and databases exist) and the client side (what the user sees when accessing an application).

With real-time applications like Uber’s, code now often runs in several different places, unlike in the past, when a simple desktop application only had to interact with a web server to access data, explained DeBergalis.

[pullquote person=”Matt DeBergalis” attribution=”Matt DeBergalis, founder, Meteor Development Group” id=”902718″]With the right design, you can build experiences like Uber with ten lines of code.[/pullquote]

A modern, real-time application can be composed of multiple codebases (an Android application, an iOS application and a desktop application, for example), multiple APIs to ensure that all of those different codebases can speak to each other, and multiple databases. A web framework like Meteor essentially covers all of these areas and negates the need for teams of specialists whose jobs are to maintain several different codebases, he said.

“The idea of isomorphic JavaScript is you want to use the same language and same API in all of those places,” said DeBergalis.
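To make that concrete, here is a minimal, hypothetical sketch of what “same language and same API in all of those places” can look like; it is not Meteor’s actual API, just a small TypeScript module with no browser or server dependencies, so the identical logic can be imported by both the client and the Node backend. The RideRequest shape and field names are invented for illustration.

```typescript
// shared/validateRide.ts
// A minimal sketch of the "isomorphic" idea: this module touches neither the
// DOM nor any server-only API, so the exact same function can be imported by
// browser code (say, to disable a submit button) and by Node server code (to
// reject bad requests before they hit the database).
// The RideRequest shape is invented purely for illustration.

export interface RideRequest {
  pickup: string;
  dropoff: string;
}

export function isValidRide(ride: Partial<RideRequest>): ride is RideRequest {
  // Identical validation logic on both sides means client and server can
  // never drift out of sync.
  return (
    typeof ride.pickup === "string" && ride.pickup.length > 0 &&
    typeof ride.dropoff === "string" && ride.dropoff.length > 0
  );
}
```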

Because the application is now built on one single framework, it’s simpler to keep track of live updates. The JavaScript can watch for changes in a MongoDB database and “alert the programmer when information in that database changes,” DeBergalis said.

Keeping track of database changes is imperative for Meteor as it allows real-time syncing of data on different devices. The Meteor framework works by including “little cache servers next to each user” that are stored in-memory on the user’s device, DeBergalis explained.

Getting started with Meteor

These in-memory database cache servers are essentially connected to the main database servers stored at the home base, and every time a change in the database occurs due to a user request or transmission, the framework updates those small cache servers so that users get their data fed to them as quickly as possible.
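One way to picture that pattern is the sketch below. It is not Meteor’s implementation (at the time, Meteor watched MongoDB by tailing the oplog); it uses the modern MongoDB driver’s change streams plus the `ws` WebSocket package to show the same idea of watching the database and pushing updates out to connected clients, which keep their own in-memory copies. The connection string, database, collection name and port are all assumptions.

```typescript
// A hypothetical sketch of "watch the database, push changes to the clients'
// caches." It is NOT Meteor's implementation; it uses MongoDB change streams
// plus the `ws` WebSocket package to show the same pattern.
import { MongoClient } from "mongodb";
import { WebSocketServer, WebSocket } from "ws";

async function main() {
  // Change streams require MongoDB to run as a replica set.
  const mongo = await MongoClient.connect("mongodb://localhost:27017");
  const tasks = mongo.db("app").collection("tasks");

  // Each connected browser plays the role of the "little cache server"
  // DeBergalis describes: it keeps a local copy and waits for pushed updates.
  const wss = new WebSocketServer({ port: 8080 });

  // Watch the collection and broadcast every change, so clients refresh their
  // in-memory copies without polling.
  tasks.watch().on("change", (change) => {
    const payload = JSON.stringify(change);
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) client.send(payload);
    }
  });
}

main().catch(console.error);
```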

What’s next for Meteor?

Since Meteor was founded in the summer of 2011, it’s gained a lot of traction with developers who are looking for a quicker way to build real-time apps. The 19-person company counts hot startups like Slack, Stripe and Respondly as users.

The startup also has the support of Andreessen Horowitz, which, along with Matrix Partners, drove Meteor’s $11.2 million Series A funding round in 2012, as Gigaom’s Barb Darrow reported at the time.

The next step for Meteor will be unveiling its long-awaited commercial product, called Galaxy, although DeBergalis declined to say when it will be released. While the open-source Meteor framework targets developers, DeBergalis said Galaxy will be more operations-focused, describing it as a “cloud service for running Meteor apps.”

Although DeBergalis wouldn’t spill the beans on what Meteor has in store for Galaxy, he indicated that the service will address the difficulties of running a real-time application across multiple data centers.

Meteor currently supports only the MongoDB and Redis databases, and it is working on adding support for SQL, he said. Supporting multiple databases will be important for Meteor’s success, especially as many tech observers believe that no single database can satisfy all needs.

Meteor is also working on a Windows port, which DeBergalis feels will capture the attention of the [company]Microsoft[/company] developer community, especially considering the recent open-sourcing of the .NET framework.

“What’s interesting is that developers on .NET today are looking for ways to get to the phone,” DeBergalis said. “If I’ve been a .NET developer for a couple of years, I would want to look for something new.”

Will big enterprises start to give Meteor a test drive, given that for now it appears to be more of a startup tool? While Meteor promises an easier way to build applications, it might be a chore for legacy companies to convert their old application infrastructure to the new framework, although DeBergalis said “there are ways that companies can retrofit their old applications to this new world.”

There’s no denying that the development world is changing and users are demanding fast-responding applications; with new frameworks like Meteor, catching up to this changing world could be less of a nightmare for enterprises.

How NASA launched its web infrastructure into the cloud

Among U.S. government agencies, the adoption of cloud computing hasn’t been moving full steam ahead, to say the least. Even though the Obama administration unveiled its cloud-first initiative in 2011, calling for agencies to move their aging legacy IT systems to the cloud, few agencies have made great strides in modernizing their infrastructure.

In fact, a September 2014 U.S. Government Accountability Office report on federal agencies and cloud computing explained that while several agencies boosted the amount of IT budget cash they spend on cloud services since 2012 (the GAO studied seven agencies in 2012 and followed up on them in 2014), “the overall increase was just 1 percent.” The report stated that the agencies’ small increase in cloud spending compared to their overall budget was due to the fact that they had “legacy investments in operations and maintenance” and were not going to move those over to the cloud unless they were slated to be either replaced or upgraded.

But there are at least a few diamonds in the rough. The CIA recently found a home for its cloud on Amazon Web Services. And, in 2012, NASA contracted with cloud service broker InfoZen for a five-year, $40 million project to migrate NASA’s web infrastructure, including NASA.gov, to the Amazon cloud and maintain it there.

This particular initiative, known as the NASA Web Enterprise Services Technology (WestPrime) contract, was singled out in July 2013 as a successful cloud-migration project in an otherwise scathing NASA Office of Inspector General audit report on NASA’s progress in moving to cloud technology.

Moving to the cloud

In August, InfoZen detailed the specifics of its project and claimed it took 22 weeks to migrate 110 NASA websites and applications to the cloud. As a result of the project’s success, the Office of Inspector General recommended that NASA departments use the WestPrime contract or a similar contract in order to meet policy requirements and move to the cloud.

The WestPrime contract primarily deals with NASA’s web applications and doesn’t take into account high-performance computing endeavors like rocket-ship launches, explained Julie Davila, the InfoZen cloud architect and DevOps lead who helped with the migration. However, don’t let that lead you to believe that migrating NASA’s web services was a simple endeavor.

Just NASA’s “flagship portal,” nasa.gov, which contains roughly 150 applications and around 200,000 pages of content, took about 13 weeks to move, said Roopangi Kadakia, a web services executive at NASA. And not only did NASA.gov and its related applications have to be moved, they also had to be upgraded from old technology.

NASA was previously using an out-of-support proprietary content management system and used InfoZen to help move that over to a “cloudy Drupal open-source system,” she said, which helped modernize the website so it could withstand periods of heavy traffic.

“NASA.gov has been one of the top visited places in the world from a visitor perspective,” said Kadakia. When a big event like the landing of the Mars Rover occurs, NASA can experience traffic that “would match or go above CNN or other large highly traffic sites,” she said.

NASA’s Rover Curiosity lands on Mars

NASA runs three cable channels continually on its site, so it wasn’t just looking for a cloud infrastructure tailored to handle only worst-case scenarios; it needed something that could keep up with the media-rich content NASA consistently streams, she said.

The space agency uses [company]Amazon[/company] Web Services to provide the backbone for its new Drupal content management system, and it has worked out an interesting way to pay for the cloud, explained Kadakia. NASA uses a contract vehicle called Solutions for Enterprise-Wide Procurement (SEWP) that functions like a drawdown account between NASA and Amazon.

The contract vehicle takes into account that the cost of paying for cloud services can fluctuate based on needs and performance (a site might get a spike in traffic one day and see it drop the next). Kadakia estimates that NASA could end up spending around $700,000 to $1 million on AWS for the year; the agency can put $1.5 million into the account to cover any unforeseen costs, and any money not spent can be saved.

“I think of it like my service card,” she said. “I can put 50 bucks in it. I may not use it all and I won’t lose that money.”

Updating the old

NASA also had to sift through old applications on its system that were “probably not updated from a tech perspective for seven-to-ten years,” said Kadakia. Some of the older applications’ underlying architecture and security risks weren’t properly documented, so NASA had to do an audit of these applications to “mitigate all critical vulnerabilities,” some of which its users didn’t even know about.

“They didn’t know all of the functionalities of the app,” said Kadakia. “Do we assume it works [well]? That the algorithms are working well? That was a costly part of the migration.”

After moving those apps, NASA had to define a change-management process for its applications so that each time something got altered or updated, there was documentation to help keep track of the changes.

To help with the nitty-gritty details of transferring those applications to AWS and setting up new servers, NASA used the Ansible configuration-management tool, said Davila. When InfoZen came on board, the apps were stored in a co-located data center where they weren’t being managed well, he explained, and many server operating systems weren’t being updated, leaving them vulnerable to security threats.

Without the configuration-management tool, Davila said, it would “probably take us a few days to patch every server in the environment” using shell scripts. Now, the team can “patch all Linux servers in, like, 15 minutes.”

NASA currently has a streamlined devops environment in which spinning up new servers is faster than before, he explained. Whereas it used to take NASA roughly one-to-two hours to load up an application stack, it now takes around ten minutes.

What about the rest of the government?

Kadakia claimed that moving to the cloud has saved NASA money, especially as the agency cleaned out its system and took a hard look at how old applications were originally set up.

The agency is also looking at optimizing its applications to fit in with the more modern approach of coupled-together application development, she explained. This could include updating or developing applications that share the same data sets, which would have previously been a burden, if not impossible, to do.

A historical photo of the quad, showing Hangar One in the back before its shell was removed. Photo courtesy of NASA.

Larry Sweet, NASA’s CIO, has taken notice of the cloud-migration project’s success and sent a memo to the entire NASA organization urging other NASA properties to consider the WestPrime contract first if they want to move to the cloud, Kadakia said.

While it’s clear that NASA’s web services have benefited from being upgraded and moved to the cloud, it remains hazy whether other government agencies will follow suit.

David Linthicum, a senior vice president at Cloud Technology Partners and a Gigaom analyst, said he believes there isn’t a sense of urgency for these agencies to convert to cloud infrastructure.

“The problem is that there has to be a political will,” said Linthicum. “I just don’t think it exists.”

Much like President Obama appointed an Ebola czar during the Ebola outbreak this fall, there should be a cloud czar who is responsible for overseeing the rejiggering of agency IT systems, he said.

“A lot of [government] IT leaders don’t really like the cloud right now,” said Linthicum. “They don’t believe it will move them in the right direction.”

Part of the problem stems from the contractors the government is used to working with. Organizations like [company]Lockheed Martin[/company] and [company]Northrop Grumman[/company] “don’t have cloud talent,” he said, and are not particularly suited to guiding agencies that want to move to the cloud.

Still, now that NASA’s web services and big sites are part of the cloud, perhaps other agencies will begin taking notice.

Images courtesy of NASA

It’s all Docker, containers and the cloud on the Structure Show

[soundcloud url=”https://api.soundcloud.com/tracks/182147043″ params=”color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false” width=”100%” height=”166″ iframe=”true” /]

It’s safe to say that Docker has had a momentous year, with the container-management startup gaining a lot of developer interest and scoring a lot of support from big tech companies like Amazon, Google, VMware and Microsoft.

Docker CEO Ben Golub came on the Structure Show this week to talk about Docker’s year and what he envisions the company becoming as it continues to grow (hint: it’s aiming for something similar to [company]VMware[/company]). Golub also talks about Docker’s raft of new orchestration features and shares his thoughts on the new CoreOS container technology and how it fits in with Docker.

If you listened to our recent Structure Show featuring CoreOS CEO Alex Polvi and are curious to hear Docker’s reaction and perspective on Rocket, you’ll definitely want to hear this week’s episode.

In other news, Derrick Harris and Barb Darrow kick things off by looking at how Hortonworks and New Relic shares are holding up, and the good news is they’re doing pretty well at the ripe old age of one week.

Also on the docket, [company]IBM[/company] continues its cloud push by bringing a pantload of new data centers online — in Frankfurt (for the all-important German market) as well as Mexico City and Tokyo. In October, IBM said it was working with local partner Tencent to add cloud services for the Chinese market, which reminds us that Amazon Web Services Beijing region remains in preview mode.

 

Ben Golub, CEO of Docker

SHOW NOTES

Hosts: Barbara Darrow, Derrick Harris and Jonathan Vanian

Download This Episode

Subscribe in iTunes

The Structure Show RSS Feed

PREVIOUS EPISODES:

Mo’ money, mo’ data, mo’ cloud on the Structure Show

Why CoreOS went its own way on containers

More from Facebook on its new networking architecture 

Do you find OSS hard to deploy? Say hey to ZerotoDocker

All about AWS Re:Invent and oh some Hortonworks and Microsoft news too

 

Reports: US to confirm North Korea behind the Sony hack

After much speculation, the U.S. government will reportedly confirm soon, as early as Thursday, that North Korea was responsible for the massive hack against Sony Pictures Entertainment, according to multiple reports from CNN, NBC and The New York Times. According to NBC, unnamed U.S. officials said that while the attacks originated outside the reclusive nation, the hackers were operating under orders from the North Korean government.

The mega hack, which started on November 24, took down Sony’s email systems and resulted in the leak of five movies — including Annie and To Write Love on Her Arms — as well as employee social security numbers, medical records and salary information. Private emails between Sony officials were also leaked and generated a lot of embarrassing attention for Sony.

The hack sent Sony on a downward spiral as it dealt with the ramifications of having private emails and sensitive documents unleashed to the public. On Tuesday, Sony employees filed a class-action lawsuit against the company for not providing enough security around their data and not taking the appropriate measures to protect them once their data was known to be breached.

On Wednesday, [company]Sony[/company] officially cancelled the December 25 release of the action-comedy The Interview, starring Seth Rogen and James Franco. The movie centers on a pair of Americans who have been assigned to assassinate Kim Jong-un, North Korea’s dictator-leader.

The decision to stop the movie’s screening came in light of a hacker group taking credit for the attack and indicating that some sort of violence would occur at theaters that play the movie.

North Korea previously denied that it was involved with the hack, but also seemed to enjoy the devastation it caused, according to a report in The New York Times.

In early December, a North Korean government spokesman told the BBC in response to the hack allegations, “The hostile forces are relating everything to the DPRK (North Korea). I kindly advise you to just wait and see.”

Rackspace joins the OpenPower Foundation

Rackspace is now an official member of the OpenPower Foundation, the IBM-created organization whose job is to help oversee IBM’s opened-up Power chip designs; these chips are poised to give Intel’s x86 chips a run for their money. The cloud provider said in a blog post Tuesday that it will be working with partners “to design and build an OpenPOWER-based, Open Compute platform” that it eventually aims to put into production. Rackspace joins Google, Canonical, Nvidia and Samsung as an OpenPower member. In early October, IBM announced a new OpenPower-certified server for webscale-centric companies that comes with an IBM Power8 processor and Nvidia’s GPU accelerator.

Microsoft open sources cloud framework that powers Halo

Microsoft is continuing its open-source push, this time announcing that it will open source its Project Orleans cloud computing framework. The framework has supposedly been “used extensively” in the Azure cloud and is best known for powering the first-person shooter video game Halo 4.

The Project Orleans framework, which was previously made available as a preview by Microsoft in April 2014, is built on .NET and was designed to make it easier for coders to develop cloud services that need to scale a lot. This makes sense given that Microsoft uses it for multiplayer-centric video games in which gamers are notified of what their friends are doing online and need their gaming statistics transmitted back and forth across thousands of servers in seconds.

Project Orleans is basically a distributed version of what’s known as the Actor Model, a concurrent computing model in which collections of software objects, called actors, communicate with one another and can behave differently each time they are pinged to handle a request.

While languages and frameworks like Erlang and Akka already take advantage of the Actor Model, users still have to do a lot of legwork to make sure those actors stay online and can handle failure and recovery. The Project Orleans framework supposedly handles that complexity and actor management itself, letting users code distributed projects without having to worry about it.

From the Microsoft blog post:

[blockquote person=”Microsoft” attribution=”Microsoft”]First, an Orleans actor always exists, virtually. It cannot be explicitly created or destroyed. Its existence transcends the lifetime of any of its in-memory instantiations, and thus transcends the lifetime of any particular server. Second, Orleans actors are automatically instantiated: if there is no in-memory instance of an actor, a message sent to the actor causes a new instance to be created on an available server. An unused actor instance is automatically reclaimed as part of runtime resource management.
[/blockquote]
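To picture what “automatically instantiated” means in practice, here is a tiny, hypothetical sketch of the virtual-actor idea, written in TypeScript rather than Orleans’ actual .NET API: actors are addressed by an id, activated on demand when the first message arrives, and carry state that can change how they answer the next request. The GameSessionActor name and message shape are invented for illustration.

```typescript
// A tiny, hypothetical TypeScript sketch of the "virtual actor" idea in the
// quote above; it is not Orleans' actual .NET API.
type Message = { kind: string; payload?: number };

class GameSessionActor {
  private score = 0;
  constructor(public readonly id: string) {}

  // State carried between messages is why the actor can behave differently
  // each time it is pinged.
  handle(msg: Message): number {
    if (msg.kind === "addPoints") this.score += msg.payload ?? 0;
    return this.score;
  }
}

class ActorRuntime {
  private actors = new Map<string, GameSessionActor>();

  // "Automatically instantiated": sending to an unknown id activates a new
  // instance on demand, mimicking actors that always exist virtually.
  send(id: string, msg: Message): number {
    let actor = this.actors.get(id);
    if (!actor) {
      actor = new GameSessionActor(id);
      this.actors.set(id, actor);
    }
    return actor.handle(msg);
  }
}

const runtime = new ActorRuntime();
console.log(runtime.send("player-42", { kind: "addPoints", payload: 10 })); // 10
console.log(runtime.send("player-42", { kind: "addPoints", payload: 5 }));  // 15
```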

[company]Microsoft[/company] said the open sourcing of Project Orleans should be complete by early 2015; Microsoft Research will release the code under an MIT license and will post it on GitHub.

Avi Networks, fresh with $33M, aims to virtualize the network

Networking startup Avi Networks is ready to explain just how it virtualizes networking gear so that enterprises don’t have to rely on legacy equipment. The startup is also announcing that it has raised $33 million in venture capital since being founded in 2012.

Greylock Partners, Lightspeed Venture Partners and Menlo Ventures are all investors in the startup.

The Sunnyvale, California, startup is one of the many companies that’s taking a software approach to networking to stop the reign of legacy networking providers like [company]Citrix[/company] and [company]Riverbed[/company], which sell proprietary hardware.

Avi Networks can be considered a network functions virtualization (NFV) player, explained its CEO and co-founder Umesh Mahajan. It wants to take the software functions that power proprietary networking hardware, like application delivery controllers and load balancers, improve on them, and let users run that software on cheaper, generic servers while still getting the same capabilities.

What makes Avi Networks different from other vendors, like the recently launched Akanda, is the startup’s plan to target the load balancer appliance and the application delivery controller, which help distribute traffic across the data center and ensure that when a server goes offline, another one can pick up the slack. These devices, which operate at Layer 4 and Layer 7 of the network, are designed to make sure that applications are running well and performance isn’t suffering.
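For context on what those appliances actually do, here is a toy Layer-7 load balancer sketched in TypeScript on Node’s built-in http module: it accepts incoming requests and spreads them round-robin across a pool of backend servers. It is an illustration of the concept, not Avi Networks’ software, and it omits everything a real ADC adds (health checks, TLS termination, the analytics discussed later in the piece); the backend addresses are made up.

```typescript
// A toy Layer-7 load balancer: take incoming requests and spread them across
// a pool of backends, round-robin. Conceptual illustration only.
import * as http from "http";

const backends = [
  { host: "10.0.0.11", port: 8080 },
  { host: "10.0.0.12", port: 8080 },
];
let next = 0;

http
  .createServer((req, res) => {
    // Round-robin: each request goes to the next backend in the pool.
    const target = backends[next];
    next = (next + 1) % backends.length;

    const proxyReq = http.request(
      {
        host: target.host,
        port: target.port,
        path: req.url,
        method: req.method,
        headers: req.headers,
      },
      (proxyRes) => {
        res.writeHead(proxyRes.statusCode ?? 502, proxyRes.headers);
        proxyRes.pipe(res);
      }
    );
    proxyReq.on("error", () => {
      // A real load balancer would retry another server; here we just fail.
      res.writeHead(502);
      res.end("Bad gateway");
    });
    req.pipe(proxyReq);
  })
  .listen(80);
```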

Avi Networks controller

The technology is similar to what content-delivery network provider [company]Fastly[/company] is doing in its data centers. Fastly customized the software inside its [company]Arista[/company] switches and servers so that those devices now take on load balancing, thus negating the need to buy a load balancer appliance.

Avi Networks basically sells customized load-balancing software, offered on a “pay-as-you-use” model, that runs on standard x86 hardware, explained Dhritiman Dasgupta, the company’s vice president of marketing. As the load-balancing software collects and routes traffic, it also stores some of that data in a custom-built analytics engine that can help staff figure out what is going on when a networking problem occurs, he explained.

For example, using the analytics engine, users should be able to tell if a problem with a server might have caused an application to go down or if it was actually something with the application itself.

Companies often buy more gear than they actually need because they want to make sure they can handle a spike in traffic, should one occur, Dasgupta said. Avi Networks supposedly cuts down on the need to buy more gear because its software can handle enormous amounts of traffic, and a user can scale the service back when there isn’t much traffic.

“As user demand increases, you pay for what you use,” said Dasgupta. “As the demand goes away, we downgrade the service.”

Update: Story clarified to emphasize Avi Networks targeting load balancers and not switches or routers.