Posted:
Google Cloud Platform improves as a result of extensive collaboration--including collaboration with users. In particular, user research studies help us improve our cloud platform by allowing us to get feedback directly from cloud and IT administrators around the world.

We’d like to invite you today to join our growing pool of critical contributors. Simply fill out our form and we’ll get in touch as user research study opportunities arise.

During a study, we may present you with and gather your feedback on Google Cloud Platform, a new feature we’re developing, or even prototypes. We may also interview you about particular daily habits or ask you to keep a log of certain activity types over a given period of time. Study sessions can happen at a Google office, in your home or business, or online through your computer or mobile device:
  • Usability study at a Google office: for those who live near one of our offices. Typically, you’ll come visit us and meet 1-on-1 with a Google researcher. They’ll ask you some questions, have you use a product, and then gather your feedback on it. The product could be something you’re already familiar with or a never-before-seen prototype.
  • Remote usability study: Rather than have you visit our offices, a Google researcher will harness the power of the Internet to conduct the study. Basically, they’ll call you on the phone and set up a screen sharing session with you on your own computer. You can be almost anywhere in the world, but need to have a high-speed Internet connection.
  • Field study: Google researchers hit the road and come visit you. We won't just show up at your door though – we’ll always check in with you first, talk to you about the details of the study and make a proper appointment.
  • Experiential sampling study: These studies require a small amount of activity every day over the course of several days or weeks. Google researchers will ask you to respond to questions about a product, or to make entries in a diary about your use of it, using your mobile phone, tablet, or laptop to complete the study questions or activities.

After the study, you'll receive a token of our appreciation for your cooperation, such as a gift card. Sharing your experiences with us helps inform our product planning and moves us closer to our goal of building a cloud platform that you'll love.

More questions? Check out our FAQs page to learn more about our user research studies.

- Posted by Google UX Research Infrastructure Team

Posted:
Founded in 2012, Energyworx offers big data aggregation and analytics cloud software services for the energy and utilities industry. Their products and services include grid optimization and reliability, meter-data management, consumer engagement, energy trading and environmental-impact reduction. They are based in the Netherlands. To learn more, visit www.energyworx.org.

Getting all cloudy gives you a tremendous amount: agility, scalability, cost savings and more. The scales weigh heavily in favor of embracing cloud goodness. However, on the other side of that scale, getting all cloudy means giving up a degree of control. You don’t control the infrastructure and, in certain cases, you don’t know the implementation behind the APIs you rely on. This is especially true of managed services such as databases and message queues, and those APIs and associated SLAs are central to the operation of your systems. There’s nothing surprising, bad or wrong about this situation; as stated previously, there are far more pros than cons with the cloud. But as engineers whose reputations (and need for a night’s sleep uninterrupted by a 3am wake-up call) rely on the stability and scalability of the systems we build, what do we do? We follow the age-old maxim, trust but verify, and verify by testing!

Testing comes in many forms but broadly there are two types: functional and stress testing. Functional tests check for correctness. When I register for your service, does my email address get encrypted and correctly persisted? Stress tests check for robustness. Does your service handle 100,000 users registering in the fifteen minutes after it’s mentioned in the news? As an aside, I was tempted as I wrote this post to phrase everything in terms of “we all know this…” and “of course we all do that…”, because we do all know testing is a good thing and we all do it to one extent or another. But the number of scalability problems good engineers still face is proof that the importance of stress testing isn’t a universally held truth, or at least not a universally practiced one. The remainder of this post focuses on a set of best practices we distilled from a stress testing exercise we did on Google Cloud Platform with Energyworx as part of their go-live.

Energyworx and Google Cloud Platform leveraged existing Energyworx REST APIs together with Grinder to stress test the system. Grinder allows the calls to the REST APIs to be scaled up and down as required, depending on the type and degree of stress to be applied. Test scenarios were based around scaling the number of smart meters uploading data, the amount of work performed by the meters and the physical locations of the meters. For example, we knew a single meter worked correctly, so let’s try several hundred thousand meters working at the same time, or let’s have meters running in Europe access the system in the US, or let’s have thousands of meters do an end-of-day upload at the same time. Following these best practices, Energyworx ran extended 200-core tests for approximately $10 a time and proved that their system was ready for millions of meters flooding the grid daily with billions of values. We were right, and the Energyworx launch went off without a hitch. Stress testing is a blast…
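For readers who haven’t used Grinder: its worker processes execute Jython test scripts, and the number of worker processes and threads is dialed up or down in grinder.properties, which is what makes scaling the simulated load straightforward. The following is a minimal sketch in the spirit of the meter-upload scenario; the endpoint, payload and timing are hypothetical, not the actual Energyworx test script.

```python
# Grinder worker script (Jython). The endpoint and payload are made up for illustration.
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest

# Wrap the HTTP client in a Test so Grinder records timing statistics for each call.
upload = Test(1, "End-of-day meter upload").wrap(HTTPRequest())

class TestRunner:
    def __call__(self):
        meter_id = grinder.threadNumber          # each worker thread simulates one meter
        payload = '{"meter": %d, "readings": [0.42, 0.37, 0.51]}' % meter_id
        upload.POST("https://api.example.com/v1/readings", payload)
        grinder.sleep(1000)                      # mean pause of about one second between uploads
```

Scaling from one simulated meter to several hundred thousand is then largely a matter of running more worker processes across more Compute Engine instances, which leads to the first best practice.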

The first best practice is to leverage Google Cloud Platform to provide the resources for stress testing. Simulating hundreds of thousands of smart meters (or users, or game sessions, or other stimuli) takes resources, and Google Cloud Platform allows you to spin these up on demand, in very little time, and pay for them by the minute. That’s a great deal for stress testing.

The second best practice is to use stress testing as a probe. Systems are often complex, with different tiers and services interacting, and it can be tough to predict how they will behave under stress, so use stress testing to probe the behavior of your system and of the infrastructure and services it relies upon. Be creative with your scenarios and you’ll learn a lot about your system’s behavior.

The third best practice is to test the rate of change of the load you apply as well as the maximum load. It’s great to know your system can handle 100K transactions per second, but that isn’t much use if it can only reach that level in increments of 10K per minute over 10 minutes, when a single news article from the right expert can bring you that much traffic in the web equivalent of the blink of an eye.
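To make the distinction concrete, here is a toy sketch of the two load profiles; apply_load() is a hypothetical hook into whatever load generator you use, and the numbers are illustrative.

```python
import time

def ramp_profile(target_tps, steps, step_seconds):
    """Approach the target rate gradually, in equal increments."""
    for i in range(1, steps + 1):
        yield target_tps * i // steps, step_seconds

def spike_profile(target_tps, hold_seconds):
    """Jump straight to the target rate, the way a viral news link behaves."""
    yield target_tps, hold_seconds

def run(profile, apply_load):
    for tps, seconds in profile:
        apply_load(tps)        # hypothetical hook into your load generator
        time.sleep(seconds)

# run(ramp_profile(100000, 10, 60), apply_load)   # +10K transactions/sec each minute
# run(spike_profile(100000, 600), apply_load)     # the full 100K immediately
```

A system that passes the first profile but falls over on the second has only been half tested.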

The fourth best practice is to test regularly. If you release each Friday and bugfix on demand, you don’t need to stress test every release, but you should stress test the entire system every 2-4 weeks to ensure that performance is not degrading over time.

- Posted by Corrie Elston, Solutions Architect

Posted:
From bringing people together at the World Cup, to improving the way employees talk to each other, Google Cloud Platform Services Partners help customers unlock the full potential of our products.
To help our partners focus more on their customers’ experiences, we are pleased to announce that we’re now accepting applications for a reselling option from eligible, existing Google Cloud Platform services partners, and we anticipate expanding to new partner program applicants in early fall.

As a reseller of Cloud Platform, partners will be able to provision and manage their customers via the new Cloud Platform reseller console. Google Cloud Platform resellers will:
  • Fully manage their customers’ Google Cloud Platform experience, from onboarding through implementation
  • Provide the first line of support and be responsible for customer problem resolution
  • Provide customers with a billing service that matches their specific requirements, in their local currency

The ability to resell will be especially beneficial to partners aiming to bundle multiple Cloud Platform services and present one consolidated bill to their customers.

“The reseller console showcases deep insights into our customers’ engagement with the platform, allowing us to make informed recommendations in terms of best practices and opportunities available to our customers. As a trusted solutions partner, it’s paramount for us to provide white glove services to make their transition to the cloud as seamless as possible.”
           -- Tony Safoian, Sada Systems CEO
If you’re an existing services partner and want to learn more about your organization's eligibility for reselling, visit our application page on Google for Work Connect. And if you’re new to Google Cloud Platform and interested in becoming a services partner, visit our site at cloud.google.com/partners.

- Posted by Adam Massey - Director, Global Partner Business

Posted:
While containers make packaging apps easier, a powerful cluster manager and orchestration system is necessary to bring your workloads to production.  Today, Google Container Engine is generally available and production ready, backed by Google’s 99.5% service level agreement.  Container Engine makes it easy for you to set up a container cluster and manage your application, without sacrificing infrastructure flexibility.  Try it today.

Set Up a Managed Container Cluster in a Few Clicks
With Container Engine, you can create a managed cluster that’s ready for container deployment, in just a few clicks. Container Engine is fully managed by Google reliability engineers, so you don’t have to worry about cluster availability or software updates.
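If you prefer an API to the console, the same thing can be done programmatically. Here is a minimal sketch using the Google API Python client; the project, zone and cluster name are hypothetical, and Application Default Credentials are assumed to be configured.

```python
# Create a small Container Engine cluster via the Container Engine API.
from googleapiclient import discovery

gke = discovery.build("container", "v1")   # uses Application Default Credentials

request = gke.projects().zones().clusters().create(
    projectId="my-project",                # hypothetical project
    zone="us-central1-a",
    body={"cluster": {"name": "demo-cluster", "initialNodeCount": 3}},
)
print(request.execute())                   # returns a long-running operation you can poll
```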

Container Engine also makes application management easier.  Your cluster is equipped with common capabilities, such as logging and container health checking, to give you insight into how your application is running.  And, as your application’s needs change, resizing your cluster with more CPU or memory is easy.


“We chose Kubernetes to get the most out of our application infrastructure, and we chose to move to Google Container Engine from another cloud provider to get the most out of Kubernetes. Our infrastructure on Container Engine runs at about 40% of its original deployment on the other cloud provider, and Google’s sustained use discounts and per minute pricing have led to further cost savings.”

-- Jay Allen, Porch CTO

Declarative Container Scheduling and Management
Many applications take advantage of multiple containers; for example, a web application might have separate containers for the webserver, cache, and database.  Container Engine is powered by Kubernetes, the open source orchestration system, making it easy for your containers to work together as a single system.

Container Engine schedules your containers into your cluster and manages them automatically, based on requirements that you declare.  Simply define your containers’ needs, such as the amount of CPU/memory to reserve, number of replicas, and keepalive policy, and Container Engine will actively ensure requirements are met.
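To illustrate what declaring those needs looks like, here is a minimal sketch using the Kubernetes Python client; the image name, resource figures, replica count and health-check path are all hypothetical, and a Deployment is used as the example object.

```python
# Declare desired state: three replicas, reserved CPU/memory, and a liveness check.
from kubernetes import client, config

config.load_kube_config()                            # use your cluster credentials

container = client.V1Container(
    name="web",
    image="gcr.io/my-project/web:1.0",               # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "256Mi"}  # CPU/memory to reserve
    ),
    liveness_probe=client.V1Probe(                   # restart the container if this check fails
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10,
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                  # number of replicas to keep running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Kubernetes then works continuously to keep the running state matching this declaration: if a container fails its health check or a node goes away, replacements are scheduled automatically.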

“The declarative nature of Kubernetes has proven to be very powerful in streamlining and simplifying spinning up application components, which include Django, Geoserver, ELK stack, Redis, PostGIS, and then GeoMesa interfacing with our Google Cloud Bigtable instance.“

-- Tim Kelton, co-founder, Descartes Labs

Cloud Flexibility with Kubernetes
Most customers live in a multi-cloud world, using both on-premises and public cloud infrastructures to host their applications.  With Red Hat, Microsoft, IBM, Mirantis OpenStack, and VMware -- and the list keeps growing -- integrating Kubernetes into their platforms, you’ll be able to move workloads, or take advantage of multiple cloud providers, more easily.  Container Engine and Kubernetes provide you with flexibility, whether you use on-premises, hybrid, or public cloud infrastructure.


“When we implemented our new, microservice-based container architecture, we chose Kubernetes because we needed a single, simple, standardized runtime platform that we could easily and quickly deploy across multiple environments. We use Container Engine in conjunction with our own infrastructure (powered by Mirantis OpenStack) and other public clouds to diversify our infrastructure risk.”

-- Lachlan Evenson, Lithium Cloud Platform Engineering

Ready for Production
Everything at Google, from Search to Gmail, is packaged and run in a Linux container. Each week we launch more than 2 billion container instances across our global data centers.  Container Engine represents the best of our experience with containers and we’re excited for you to give it a spin.  Get started with Container Engine today.

As a very small token of thanks for your support, we’re giving away 1,000 Container Engine t-shirts.  Simply be one of the first 1,000 people to tweet @googlecloud with the hashtag #imakubernaut about why you love Container Engine or Kubernetes and get a t-shirt (some conditions apply).

- Posted by Craig McLuckie, Product Manager

Posted:
We recently announced that Google Cloud Dataflow and Google Cloud Pub/Sub graduated to general availability. You can now leverage these easy-to-use, inexpensive, fully managed, large-scale big data services together with Google BigQuery to find valuable business information and insights.

BigQuery is a No-Ops analytics database that seamlessly scales in seconds, requires no instance or cluster management, offers unbeatable performance out-of-the-box, and lets you pay only for what you consume. Today, we’re releasing a new version of BigQuery that is easier to use, more powerful and more open.

With new features such as User-Defined Functions (UDFs) and an improved user interface (UI), BigQuery is now simpler and easier to use.
  • User-Defined Functions (UDFs). Expressed in JavaScript, UDFs allow you to extend SQL and execute arbitrary code within BigQuery. For example, you can now easily express complex conditional logic in your queries, with much more flexibility than regular expressions provide. Head over to our documentation to learn more.
  • Query files in Google Cloud Storage from BigQuery. It is now possible to run queries without loading files into BigQuery first. This functionality also simplifies data import into BigQuery: in addition to the existing straight “import” mechanism, you can now write queries that read from Cloud Storage files and write the results to BigQuery tables (see the sketch after this list). The federated query documentation offers more details.
  • Increased query limits. You will now be able to run 50 simultaneous queries, and 100,000 queries per day (up from 20 and 20,000). In addition, there will no longer be limits on “maximum simultaneous bytes processed” and “maximum simultaneous large queries”. These changes give you more freedom within the BigQuery ecosystem.
  • UI Improvements. We’ve added several new features, including a new “Format Query” button, automatic organization of date-sharded tables, and the ability to download query results in JSON.
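As a sketch of the federated query feature mentioned above, the snippet below queries CSV files sitting in Cloud Storage without loading them into BigQuery first, using the google-cloud-bigquery Python client; the project, bucket and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")               # hypothetical project

# Describe an external (federated) table backed by files in Cloud Storage.
external = bigquery.ExternalConfig("CSV")
external.source_uris = ["gs://my-bucket/readings_*.csv"]      # hypothetical bucket
external.autodetect = True                                    # infer the schema from the files

job_config = bigquery.QueryJobConfig(table_definitions={"readings": external})

query = """
    SELECT meter_id, SUM(kwh) AS total_kwh
    FROM readings
    GROUP BY meter_id
"""
for row in client.query(query, job_config=job_config).result():
    print(row.meter_id, row.total_kwh)
```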

We also wanted to make BigQuery more powerful and performant to help you save time and increase productivity.
  • Dynamic query optimization. This improves reliability and performance for complex queries such as large JOIN or GROUP BY operations. You can expect to see your project activated in the coming weeks. Users will no longer need to specify the EACH keyword, which greatly simplifies the writing of queries, particularly for applications that programmatically generate SQL, such as visualization tools and dashboards.
  • Enhancements to the query execution engine will result in increased performance and scale of queries that use lots of resources, such as large JOINs, analytic functions, and high-cardinality aggregations.

And lastly, we added new features to make BigQuery more open.
  • BigQuery Slots. One unique feature of BigQuery is the ability to dip into the vast shared pool of resources to scale into thousands of cores for a query. BigQuery Slots offer customers the ability to expand and allot the resources available to them, regardless of system load. Use cases include latency-sensitive SaaS, ETL, and business reporting workloads.
  • High-Compute Pricing Tiers. With the release of UDFs, dynamic query optimization, and execution engine improvements, BigQuery now supports queries that consume large amounts of compute resources relative to “bytes scanned”. To enable this higher resource consumption, we are introducing High-Compute Pricing Tiers. For more information, head over to our pricing page.

BigQuery is fully managed by Google, so customers automatically get all these benefits right away. Make use of the better UI, better performance, and additional functionality - no action needed, and no downtime. Solve your big data problems the way we solve ours!

Learn how BigQuery can help you, take a look at the documentation, and try it out! First terabyte processed is on us!

- Posted by Tino Tereshko, Technical Program Manager

Posted:
Do you have backup tapes sitting in a local closet or at a third-party storage facility? If so, I have some good news for you, because you can no longer afford to let this data sit on a shelf and collect dust.

To help you stay competitive and make it easy to import your old data backups to the cloud, we’re introducing Offline Media Import/Export. This is a solution that allows you to load data into any Google Cloud Storage class (Standard, DRA and Nearline) by sending your physical media -- such as hard disk drives (HDDs), tapes, and USB flash drives -- to a third party service provider who uploads data on your behalf. Offline Media Import/Export is helpful if you’re limited to a slow, unreliable, or expensive Internet connection. It’s also a great complement to the newly released Google Cloud Storage Nearline, a simple, low-cost, fast-response storage service with quick data backup, retrieval and access.

Offline Media Import/Export is fast, simple and can include a chain-of-custody process.
It’s faster than doing it yourself: Popular business DSL plans feature download speeds that exceed 10Mbps (megabits per second). However, upload speeds generally top out at 1Mbps, with most plans providing just 768kbps (kilobits per second) for upload. This means that uploading a single terabyte (TB) of data will take more than 100 days! This also assumes that no one else is using the same network connection. With Offline Media Import/Export, this process can now be completed in days instead of months.
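For the curious, that figure is easy to sanity-check (decimal units assumed):

```python
terabyte_bits = 1e12 * 8          # one terabyte expressed in bits
upload_bps = 768e3                # a typical 768 kbps DSL upload speed

days = terabyte_bits / upload_bps / 86400
print(days)                       # roughly 120 days, hence "more than 100 days"
```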

It’s simple: Save and encrypt your data to the media of your choice (hard drives, tapes, etc.) and ship them to the third party service provider through your preferred courier service.

It’s protected: The encrypted data will be uploaded to Google Cloud Storage using high-speed infrastructure. Third party service providers like Iron Mountain can offer a chain-of-custody process for your data. Once the data upload is complete, Iron Mountain can send the hard drive back to you, store it within their vault or destroy it.

Get Started!
More information can be found on the “Offline Media Import / Export” webpage.

- Posted by Ben Chong, Product Manager

Posted:
Networking. It’s one of the most critical elements of a datacenter, connecting machines, applications and locations to one another to transfer information, data and documents. It’s what enables your mobile device to provide you with access to your email, to send messages to your friends, to post photos to social networks and to check in at the places you visit.

And yet the only time you think about it is when it’s not there!

Amin Vahdat, Google’s Technical Fellow for Networking, posted details today on the Google Research blog about the investments Google has made in networking in order to deliver on our stated mission to organize the world’s information and make it universally accessible. As Amin states, ten years ago we realized that we could not purchase, at any price, a datacenter network that could meet the combination of our scale and speed requirements.

So we built our own!

To date, we have built and deployed five generations of our datacenter network infrastructure. Our latest-generation Jupiter network has improved capacity by more than 100x relative to our first generation network, delivering more than 1 petabit/sec of total bisection bandwidth. To put this in perspective, this provides capacity for 100,000 servers to exchange information at 10Gb/s each, enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second.

Here is a look at the hardware innovations over the years:
Firehose (2005-2006)
  • Chassis based solution (but no backplane)
  • Bulky CX4 copper cables restrict scale
WatchTower (2008)
  • Chassis with backplane
  • Fiber (10G) in all stages
  • Scale to 82 Tbps fabric
  • Global deployment
Saturn (2009)
  • 288x10G port chassis
  • Enables 10G to hosts
  • Scales to 207 Tbps fabric
  • Reuse in WAN

Jupiter (2012)
  • Enables 40G to hosts
  • External control servers
  • OpenFlow

All of this innovation in networking is available to you as a customer of Google Cloud Platform through Cloud Networking. Cloud Networking provides three key capabilities to our customers:
  • Cloud Interconnect - connect your datacenter to ours through an encrypted VPN, Direct Peering or via your Carrier.
  • Load Balancing - spread load across applications with HTTP/S or across machines with TCP/UDP.
  • Cloud DNS - reliable, low-latency DNS serving from Google’s worldwide network of Anycast DNS servers.

You can learn more about Cloud Networking at cloud.google.com/networking and in the documentation. You can also dive into the technical details on five generations of our in-house data center network architecture by downloading the whitepaper.

- Posted by Adam Hall, Head of Technical Product Marketing, Cloud Platform