Cloud Foundry Service Broker for GCP: What’s in It for Machine Learning?

by Sophia Turol, July 17, 2017
A Cloud Foundry service broker for Google Cloud Platform enables developers to access a variety of APIs and services. This session reviews some of them.

With the development of a service broker for GCP, Cloud Foundry users can now work with BigQuery, Bigtable, Cloud SQL, Cloud Storage, machine learning APIs, Pub/Sub, Spanner, Stackdriver Debugger, and Stackdriver Trace. At Cloud Foundry Summit 2017, presenters from Google and Pivotal focused on the available machine learning APIs and on live debugging.

 

A wealth of machine learning APIs

Colleen Briant is part of a Google team that maintains open-source integrations with the company’s cloud platform. First, she introduced a number of machine learning APIs that are available through the service broker for the Google Cloud Platform.

One of them is the Translation API, which helps users dynamically translate text across a wide range of supported languages. This API can be of significant help if one needs to quickly translate a web page or a whole website.


“So, a possible use case for that [Translation API] would be just translating your own website. So, just upload your HTML pages, run them through the API, and out pops a globally accessible website.” —Colleen Briant, Google

Translation API’s documentation
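For a rough idea of what an app bound to the service would do, below is a minimal sketch of a Translation API call in Python. It assumes the google-cloud-translate client library and already configured credentials; the sample text and target language are arbitrary.

```python
# A minimal sketch of a Translation API call (assumes the google-cloud-translate
# library and credentials are already configured, e.g., from a bound service).
from google.cloud import translate_v2 as translate

client = translate.Client()

# Translate a snippet of page content into German (ISO 639-1 code "de")
result = client.translate(
    "Welcome to our store. Free shipping on all orders.",
    target_language="de",
)

print(result["translatedText"])          # the translated snippet
print(result["detectedSourceLanguage"])  # the language the API detected
```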
 


The Natural Language API, another member of the Google Cloud API family, allows for analyzing text structure and meaning. With it, one can extract information about subjects of interest from text documents, news articles, or blog posts. The API proves efficient at understanding sentiment (e.g., the feedback on a product posted on social media) and at parsing intent, say, from customer conversations with a call center or a virtual assistant.

“So, something you could use this for and at your own company would be to process customer reviews, pick out the ones that are especially positive, and try to highlight those phrases on your next marketing campaign.” —Colleen Briant, Google

Natural Language API’s documentation
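The sentiment scenario above can be sketched in a few lines, assuming the google-cloud-language client library and configured credentials; the review text is made up.

```python
# A minimal sketch of scoring review sentiment with the Natural Language API
# (assumes the google-cloud-language library; the review text is made up).
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="The new dashboard is fantastic and saved our team hours every week.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment

# score runs from -1.0 (negative) to 1.0 (positive); magnitude reflects strength
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```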
 


The Cloud Speech API converts audio to text, recognizing 80+ languages. Common capabilities include transcribing text dictated through an app’s microphone and enabling voice command-and-control.

“Say, you have a phone system that you route to for customer support and you detect that your user is really struggling to use the system. Maybe the reason is they are trying to speak a language that your system is not in. So, you take a little audio sample, run it through the API, and try to route it to an agent that can assist them in their native tongue.” —Colleen Briant, Google

Cloud Speech API’s documentation
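Here is a minimal transcription sketch, assuming the google-cloud-speech client library; the Cloud Storage URI, encoding, and sample rate are placeholders for a real recording.

```python
# A minimal sketch of transcribing a short clip with the Cloud Speech API
# (assumes the google-cloud-speech library; the gs:// URI, encoding, and
# sample rate are placeholders for a real recording).
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=16000,
    language_code="en-US",
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/support-call-sample.flac")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```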
 


The newest API to join the Google collection is the Video Intelligence API, which extracts metadata from videos, enabling users to search any moment of a video file, find the occurrence of interest, and identify its significance. The API allows for annotating videos stored in Google Cloud Storage, as well as detecting key entities (nouns) and the time of their occurrence in the video. Furthermore, one can separate signal from noise by retrieving relevant information (e.g., per frame).

As Colleen put it, one can drive business insights by holding a competition for users, letting them upload videos of themselves interacting with a product. The Video Intelligence API will then pick out the moments where the product is showcased and collect the snippets, which can be further employed for business needs.

Video Intelligence API’s documentation
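A minimal label-detection sketch might look as follows, assuming the google-cloud-videointelligence client library; the Cloud Storage URI is a placeholder.

```python
# A minimal sketch of label detection with the Video Intelligence API
# (assumes the google-cloud-videointelligence library; the gs:// URI is a placeholder).
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/product-demo.mp4",
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)

# Each label carries an entity (the "key noun") and the segments where it appears
for label in result.annotation_results[0].segment_label_annotations:
    segment = label.segments[0].segment
    print(label.entity.description, segment.start_time_offset, segment.end_time_offset)
```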

 

Cloud Vision API is designed to classify images into categories, as well as detect individual objects/faces and recognize printed words in an image. The perks of using this API include:


  • building metadata on an image catalog
  • moderating offensive/explicit content
  • enabling new marketing scenarios via image sentiment analysis

A furniture retailer, for instance, can encourage customers to upload pictures of products they’ve bought from other retailers and apply palette detection to compare them against its own catalog and make suggestions.

Cloud Vision API’s documentation
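Two of those perks, building catalog metadata and moderating content, can be sketched as follows, assuming the google-cloud-vision client library; the image file name is a placeholder.

```python
# A minimal sketch of label detection and safe-search moderation with the
# Cloud Vision API (assumes the google-cloud-vision library; the image file
# is a placeholder).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("sofa.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Label detection: metadata for an image catalog
labels = client.label_detection(image=image).label_annotations
print([label.description for label in labels[:5]])

# Safe-search detection: flag offensive or explicit content before publishing
safe = client.safe_search_detection(image=image).safe_search_annotation
print("adult likelihood:", safe.adult)
```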

 

Just combine them!

As standalone tools, the above-mentioned APIs are simply cool, but the true power comes from combining them. A good example is the Google Translate app, whose functionality can be simulated by using the Cloud Vision API to get the text out of an image and then translating it into the necessary language via the Translation API.

How Google’s Neural Machine Translation works (Source)
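Here is a minimal sketch of that Vision-plus-Translation combination, assuming the google-cloud-vision and google-cloud-translate client libraries; the image path and target language are placeholders.

```python
# A minimal sketch of the Google Translate-style combination described above:
# Cloud Vision OCR pulls the text out of an image, and the Translation API
# translates it (assumes the google-cloud-vision and google-cloud-translate
# libraries; the image path and target language are placeholders).
from google.cloud import vision
from google.cloud import translate_v2 as translate

vision_client = vision.ImageAnnotatorClient()
translate_client = translate.Client()

with open("menu.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Step 1: extract the printed text from the image
ocr = vision_client.text_detection(image=image)
extracted = ocr.text_annotations[0].description if ocr.text_annotations else ""

# Step 2: translate it into the target language
translated = translate_client.translate(extracted, target_language="en")
print(translated["translatedText"])
```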

Colleen also suggested applying the Cloud Speech and Natural Language APIs to customer voicemails to gauge overall sentiment and determine whether customers seem happy, in which case one simply records their kudos, or need a call back to help out with their situation.
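That voicemail pipeline might be sketched like this, assuming the google-cloud-speech and google-cloud-language client libraries; the Cloud Storage URI and audio settings are placeholders.

```python
# A minimal sketch of the voicemail pipeline: transcribe the recording with the
# Cloud Speech API, then score it with the Natural Language API (assumes the
# google-cloud-speech and google-cloud-language libraries; the gs:// URI and
# audio settings are placeholders).
from google.cloud import language_v1, speech

speech_client = speech.SpeechClient()
language_client = language_v1.LanguageServiceClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=8000,
    language_code="en-US",
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/voicemail-0042.flac")

transcript = " ".join(
    result.alternatives[0].transcript
    for result in speech_client.recognize(config=config, audio=audio).results
)

document = language_v1.Document(
    content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
)
score = language_client.analyze_sentiment(
    request={"document": document}
).document_sentiment.score

# Happy callers get their kudos recorded; unhappy ones get a call back
print("record kudos" if score >= 0 else "schedule a call back")
```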

She also highlighted the benefits of monitoring Twitter for tweets related to one’s product. In this case, such tweets could be run through the Natural Language API for sentiment analysis. The next step would be to check any attached images for adult, fraudulent, or otherwise harmful content via the Cloud Vision API. In a nutshell, it would be an easy way to identify “marketable” content to make use of in the next campaign.

 

Debugging with Stackdriver

Colleen and Pivotal’s Mikey Boldt demonstrated an app that scrapes the aww subreddit, which is basically a collection of all sorts of cuties and fluffies (puppies, kitties, hamsters, or whatever you’re into). The app extracts images from Reddit into Google Cloud Storage and runs the Cloud Vision API over them. Using the API allowed for:

  • Searching for where else those very images appear on the web
  • Generating the image color palette
  • Making sure the image has no explicit content
  • Storing all the image-related information returned by the API in the JSON format

A reference architecture of a sample app (Source)

The top tag or label returned by the API is saved and presented to the user together with the image. You can check out the app in this GitHub repo.
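For illustration only (the demo’s actual code lives in the GitHub repo above), the Vision API calls behind those bullets might be sketched as follows; the Cloud Storage URI is a placeholder.

```python
# A rough sketch of the Cloud Vision calls behind the bullet list: web detection,
# color palette, safe search, plus the top label shown to the user. Not the demo
# app's actual code; the gs:// URI is a placeholder.
import json

from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image(source=vision.ImageSource(image_uri="gs://aww-images/puppy-123.jpg"))

labels = client.label_detection(image=image).label_annotations
web = client.web_detection(image=image).web_detection
colors = client.image_properties(image=image).image_properties_annotation.dominant_colors.colors
safe = client.safe_search_detection(image=image).safe_search_annotation

record = {
    "top_label": labels[0].description if labels else None,
    "pages_with_image": [page.url for page in web.pages_with_matching_images[:3]],
    "palette": [(c.color.red, c.color.green, c.color.blue) for c in colors[:3]],
    "adult_likelihood": safe.adult.name,
}
print(json.dumps(record, indent=2))
```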

Then, Mikey and Colleen showcased how to enable live debugging with this app already pushed to Cloud Foundry and the Stackdriver Debugger integrated through the Java buildpack. (According to Mikey, support for Python and Go is on the way.)


Stackdriver Debugger, a tool offered as part of the Google Cloud Platform, allows for inspecting an app’s state at any code location without adding logging statements and without stopping or slowing down the application. The debugger’s core features are:

  • Debugging in production. As already mentioned, no logging statements need to be added. What one does is snapshotting the app’s state and linking it back to source code. The local variables and call stack at a specific line location are captured the very first time any instance executes the code.
  • Multiple source options or no source code at all. Is it even legal? Well, if it so happens that you don’t have access to the source code, all you need is a file name and a line number to take a snapshot. Otherwise, one can upload the source to the debugger, connect it to local source files, or use a cloud source repository of choice (e.g., Google Cloud Source Repositories, GitHub, or Bitbucket).
  • Team collaboration through sharing a debug session.
  • Integration into existing developer workflows. On launching the debugger, one can take snapshots directly from logging and error-reporting dashboards, IDEs, and the command line interface.
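The demo wires the debugger in through the Java buildpack, but Google also ships language agents that an app can enable in code. Below is a minimal sketch for a Python app, assuming the google-python-cloud-debugger agent package; the module and version labels are arbitrary.

```python
# A minimal sketch of enabling the Stackdriver Debugger agent in a Python app
# (assumes the google-python-cloud-debugger package; module/version are arbitrary).
try:
    import googleclouddebugger

    # Registers this process with Stackdriver Debugger so snapshots can be taken
    # without adding log statements or stopping the app.
    googleclouddebugger.enable(module="aww-scraper", version="1.0")
except ImportError:
    # The agent is optional; the app runs normally without it.
    pass
```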

Stackdriver’s console view (Source)

Stackdriver’s debugging functionality is available to Cloud Foundry through a nozzle and its analogue for Pivotal CF. The debugger, as well as the machine learning APIs, is integrated through the GCP Service Broker. (Check out this GitHub repo for the service broker’s integration with open-source Cloud Foundry, as well as the documentation for Pivotal CF’s integration.) For technical details, check out the Stackdriver Debugger’s documentation.

 

Want details? Watch the video!

Table of contents
  1. What machine learning APIs are available within the Google Cloud Platform? (00:33)
  2. What does Translation API do? (01:14)
  3. How does Natural Language API function? (01:45)
  4. How does Cloud Speech API work? (02:07)
  5. What does Video Intelligence API offer? (02:48)
  6. What can one do with Cloud Vision API? (03:38)
  7. What comes out of a combination of these APIs? (04:24)
  8. Demo: Debugging a Cloud Foundry-based app with Stackdriver (05:43)
  9. Q&A: Is it possible to add a non-public repository to the Stackdriver code import? (16:25)
  10. Q&A: Can one use Stackdriver if an app is not running on the Google Cloud Platform? (19:44)

 
You can also watch a session by Google’s Jeff Johnson, covering how to monitor Cloud Foundry infrastructure and apps with Stackdriver and BigQuery.

 


 

About the experts

Colleen Briant is a software engineer at Google. She keeps Cloud Foundry developers and operators on the cutting edge of Google services with the GCP Service Broker and other open-source integrations. Prior to Google, Colleen worked at Booking.com as a full-stack engineer and at Yapta, a provider of airfare price assurance services, where she was mostly engaged in designing a customer web portal with reporting functionality. Colleen holds a BS in Industrial Engineering and a bachelor’s degree in Computer Science from the University of Washington.

 

Mikey Boldt is a platform engineer at Pivotal. Prior to that, he owned a software development company focused on delivering IT services to small businesses. Mikey also has a background in scientific research around cybersecurity, formal verification, and artificial intelligence.