Serdar Yegulalp

What’s new in TensorFlow machine learning

TensorFlow, Google’s contribution to the world of machine learning and data science, is a general framework for quickly developing neural networks. Despite being relatively new, TensorFlow has already found wide adoption as a common platform for such work, thanks to its powerful abstractions and ease of use.

TensorFlow 1.4 API additions

TensorFlow Keras API

The biggest changes in TensorFlow 1.4 involve two key additions to the core TensorFlow API. The first, tf.keras, lets users employ the Keras API, a neural network library that predates TensorFlow but is quickly being displaced by it. Through tf.keras, software written against Keras can be transitioned to TensorFlow, either by keeping the Keras interface permanently or as a prelude to reworking the software to use TensorFlow natively.
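
As a minimal sketch of what that looks like in practice, assuming TensorFlow 1.4 or later is installed (the model shape here is arbitrary):

    import tensorflow as tf

    # The familiar Keras Sequential interface, bundled inside
    # TensorFlow itself under the tf.keras namespace.
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()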

What’s new in Kubernetes 1.8: role-based access, for starters

The latest version of the open source container orchestration framework Kubernetes, Kubernetes 1.8, promotes some long-gestating, long-awaited features to beta or even full production release. And it adds more alpha and beta features as well.

The new additions and promotions:

  • Role-based security features.
  • Expanded auditing and logging functions.
  • New and improved ways to run both interactive and batch workloads.
  • Many new alpha-level features, designed to become full-blown additions over the next couple of releases.

Kubernetes 1.8’s new security features

Earlier versions of Kubernetes introduced role-based access control (RBAC) as a beta feature. RBAC lets an admin define access permissions to Kubernetes resources, such as pods or secrets, and then grant (“bind”) them to one or more users. Permissions can be for changing things (“create”, “update”, “patch”) or just obtaining information about them (“get”, “list”, “watch”). Roles can be applied on a single namespace or across an entire cluster, via two distinct APIs.
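
Here is a hedged sketch of those two steps through the official Kubernetes Python client (the kubernetes package); the namespace, role, and user names are hypothetical, and exact model class names can vary by client version:

    from kubernetes import client, config

    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    # A Role granting read-only access to pods in the "dev" namespace.
    role = client.V1Role(
        metadata=client.V1ObjectMeta(name="pod-reader", namespace="dev"),
        rules=[client.V1PolicyRule(
            api_groups=[""],                 # "" is the core API group
            resources=["pods"],
            verbs=["get", "list", "watch"],  # read-only verbs
        )],
    )
    rbac.create_namespaced_role(namespace="dev", body=role)

    # Bind ("grant") the role to a single user.
    binding = client.V1RoleBinding(
        metadata=client.V1ObjectMeta(name="read-pods", namespace="dev"),
        subjects=[client.V1Subject(kind="User", name="jane",
                                   api_group="rbac.authorization.k8s.io")],
        role_ref=client.V1RoleRef(api_group="rbac.authorization.k8s.io",
                                  kind="Role", name="pod-reader"),
    )
    rbac.create_namespaced_role_binding(namespace="dev", body=binding)

Swapping in V1ClusterRole and V1ClusterRoleBinding (with the matching create calls) applies the same permissions cluster-wide, which is the second of the two APIs mentioned above.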

What’s new in MySQL 8.0

MySQL, the popular open-source database that’s a standard element in many web application stacks, has reached the first release candidate for version 8.0.

Features to be rolled out in MySQL 8.0 include:

  • First-class support for Unicode 9.0 out of the box.
  • Window functions and recursive SQL syntax, for queries that previously weren’t possible or were difficult to write (see the sketch after this list).
  • Expanded support for native JSON data and document-store functionality.
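
A hedged sketch of the new window-function and recursive-CTE syntax, issued through the mysql-connector-python driver; the tables, columns, and credentials are invented for illustration:

    import mysql.connector

    conn = mysql.connector.connect(user="app", password="hypothetical",
                                   database="shop")
    cur = conn.cursor()

    # Window function: rank each customer's orders by amount
    # without collapsing rows the way GROUP BY would.
    cur.execute("""
        SELECT customer_id, order_id, amount,
               RANK() OVER (PARTITION BY customer_id
                            ORDER BY amount DESC) AS amount_rank
        FROM orders
    """)
    print(cur.fetchall())

    # Recursive CTE: walk a category tree stored as (id, parent_id)
    # rows, a query that needed workarounds before 8.0.
    cur.execute("""
        WITH RECURSIVE tree AS (
            SELECT id, parent_id, name
            FROM categories WHERE parent_id IS NULL
            UNION ALL
            SELECT c.id, c.parent_id, c.name
            FROM categories c JOIN tree t ON c.parent_id = t.id
        )
        SELECT * FROM tree
    """)
    print(cur.fetchall())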

With version 8.0, MySQL is jumping several versions in its numbering (from 5.7), due to 6.0 being nixed and 7.0 being reserved for the clustering version of MySQL.

Cython 0.27 speeds Python by moving away from oddball syntax

Cython, the toolkit that allows Python code to be converted to high-speed C code, has a new 0.27 release that can use Python’s own native type-annotation syntax to guide the Python-to-C conversion.

Previously, Cython users could accelerate Python only by decorating the code with type annotations in a dialect peculiar to Cython. Python has its own optional syntax for variable type annotation, but Cython didn’t use it.

Cython 0.27 can now recognize PEP 526-style type declarations for native Python types, such as str or list. The same syntax can also be used to declare native C types explicitly, with declarations like var: cython.int = 32.
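
A minimal sketch of the new style, written in Cython’s “pure Python” mode so the same file also runs under plain CPython; the function itself is invented for illustration:

    import cython

    def count_evens(limit: cython.int) -> cython.int:
        # PEP 526-style annotations: Cython 0.27 compiles these to
        # native C ints, while plain Python simply ignores them.
        total: cython.int = 0
        i: cython.int
        for i in range(limit):
            if i % 2 == 0:
                total += 1
        return total

    labels: list = ["a", "b"]  # native Python types work in annotations too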

ONNX makes machine learning models portable, shareable

Mesosphere DC/OS taps Kubernetes for container orchestration

Microsoft linker tool shrinks .Net applications

A long-requested and long-unfulfilled feature for .Net has finally been delivered by Microsoft and the Mono team: A linker that allows .Net applications to be stripped down to include only the parts of libraries that are actually used by the program at runtime.

The IL Linker project works by analyzing a .Net application and determining which libraries are never called by the application in question. “It is effectively an application-specific dead code analysis,” says Microsoft in its GitHub announcement for the project.
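
Conceptually, that analysis is a reachability walk over the application’s call graph. A short sketch of the idea in Python (not IL Linker’s actual implementation; the graph is invented):

    def reachable(call_graph, entry):
        """Collect every method transitively callable from the entry point."""
        seen, stack = set(), [entry]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(call_graph.get(node, []))
        return seen

    graph = {"App.Main": ["Lib.Parse"], "Lib.Parse": [], "Lib.Unused": []}
    live = reachable(graph, "App.Main")  # {'App.Main', 'Lib.Parse'}
    dead = set(graph) - live             # {'Lib.Unused'} can be stripped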

A long-term mission for IL Linker is to make it into “the primary linker for the .Net ecosystem.”

3 projects lighting a fire under machine learning

Mention machine learning, and many common frameworks pop into mind, from “old” stalwarts like Scikit-learn to juggernauts like Google’s TensorFlow. But the field is large and diverse, and useful innovations are bubbling up across the landscape.

Recent releases from three open source projects continue the march toward making machine learning faster, more scalable, and easier to use. PyTorch and Apache MXNet bring GPU support to machine learning and deep learning in Python. 
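
As a minimal sketch of that GPU support in PyTorch (assuming a reasonably recent torch build; the code falls back to the CPU when no CUDA device is present):

    import torch

    # Pick a CUDA device if one is available, otherwise stay on the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    x = torch.randn(1024, 1024, device=device)  # tensor allocated on-device
    y = x @ x.t()                               # matmul runs on the GPU
    print(y.device)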

Pivotal, VMware team up to deploy Kubernetes on vSphere

Pivotal and VMware have teamed up to deliver commercial-grade Kubernetes distributions on both VMware vSphere and Google Cloud Platform (GCP).

Pivotal Container Service (PKS), launching in Q4 2017, runs Kubernetes atop VMware’s infrastructure management tools—vSphere, vSAN, and NSX. It also taps a project from Cloud Foundry, Kubo, originally created by Pivotal and Google, to deploy and manage Kubernetes on VMware’s stack.

Microsoft’s Project Brainwave accelerates deep learning in Azure

Earlier this year, Google unveiled its Tensor Processing Unit, custom hardware for speeding up prediction-making with machine learning models.

Now Microsoft is trying something similar with its Project Brainwave hardware, which supports many major deep learning systems in wide use. Project Brainwave covers many of the same goals as Google’s TPU: speeding up how predictions are served from machine learning models (in Brainwave’s case, those hosted in Azure), using custom hardware deployed at scale in Microsoft’s cloud.

13 frameworks for mastering machine learning

Over the past year, machine learning has gone mainstream with a bang. The “sudden” arrival of machine learning isn’t fueled by cheap cloud environments and ever more powerful GPU hardware alone. It is also due to an explosion of open source frameworks designed to abstract away the hardest parts of machine learning and make its techniques available to a broad class of developers.

Docker Enterprise now runs Windows and Linux in one cluster

With the newest Docker Enterprise Edition, you can now have Docker clusters composed of nodes running different operating systems.

Three of the key OSes supported by Docker—Windows, Linux, and IBM System Z—can run applications side by side in the same cluster, all orchestrated by a common mechanism.

Clustering apps across multiple OSes in Docker requires that you build per-OS images for each app. But those apps, when running on both Windows and Linux, can be linked to run in concert via Docker’s overlay networking.
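
A hedged sketch of that arrangement through the Docker SDK for Python (the docker package), assuming a swarm that already has both Linux and Windows nodes; the image and network names are hypothetical:

    import docker

    client = docker.from_env()

    # One overlay network shared by services on both OSes.
    client.networks.create("app-net", driver="overlay")

    # Per-OS images for each piece of the app, pinned to matching nodes.
    client.services.create(
        "myapp/api:linux",
        name="api",
        networks=["app-net"],
        constraints=["node.platform.os==linux"],
    )
    client.services.create(
        "myapp/frontend:windows",
        name="frontend",
        networks=["app-net"],
        constraints=["node.platform.os==windows"],
    )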

Amazon joins Kubernetes-focused CNCF industry group

The Cloud Native Computing Foundation, created to promote and develop technologies like Kubernetes and core components of the container ecosystem spawned by Docker, welcomed Amazon Web Services into its fold this week.

Amazon comes on board as a top-level (“platinum”) member. According to Amazon’s Adrian Cockcroft, now a member of the CNCF’s governing board, containers are the big reason Amazon’s getting involved—at least, initially.

Amazon already has a major investment in container tech. Its ECS service provides managed containers that run via machine images deployed on clusters of EC2 instances. Its older Elastic Beanstalk service can deploy and manage Docker containers, although they’re scaled and managed via Amazon’s own internal stack, not the CNCF’s Kubernetes. And users can always manually deploy Docker Enterprise Edition, a container-centric Linux such as CoreOS, or a Kubernetes cluster on EC2.

IBM speeds deep learning by using multiple servers

For everyone frustrated by how long it takes to train deep learning models, IBM has some good news: It has unveiled a way to automatically split deep-learning training jobs across multiple physical servers — not just individual GPUs, but whole systems with their own separate sets of GPUs.

Now the bad news: It’s available only in IBM’s PowerAI 4.0 software package, which runs exclusively on IBM’s own OpenPower hardware systems.

Distributed Deep Learning (DDL) doesn’t require developers to learn an entirely new deep learning framework. It repackages several common frameworks for machine learning: TensorFlow, Torch, Caffe, Chainer, and Theano. Deep learning projects that use those frameworks can then run in parallel across multiple hardware nodes.

Apache Spark 2.2 gets streaming, R language boosts

With version 2.2 of Apache Spark, a long-awaited feature for the multipurpose in-memory data processing framework is now available for production use.

Structured Streaming, as that feature is called, allows Spark to process streams of data in ways that are native to Spark’s batch-based data-handling metaphors. It’s part of Spark’s long-term push to become, if not all things to all people in data science, then at least the best thing for most of them.
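
A minimal PySpark sketch of Structured Streaming: a running word count written with the same DataFrame operations Spark uses for batch jobs (assuming a test socket source on localhost:9999):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    spark = SparkSession.builder.appName("stream-demo").getOrCreate()

    # Treat lines arriving on a socket as an unbounded DataFrame.
    lines = (spark.readStream.format("socket")
             .option("host", "localhost").option("port", 9999).load())

    # Ordinary batch-style operations, applied to the stream.
    words = lines.select(explode(split(lines.value, " ")).alias("word"))
    counts = words.groupBy("word").count()

    # Continuously print updated counts as new data arrives.
    query = (counts.writeStream.outputMode("complete")
             .format("console").start())
    query.awaitTermination()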

Nvidia’s new TensorRT speeds machine learning predictions

Nvidia has released a new version of TensorRT, a runtime system for serving inferences using deep learning models through Nvidia’s own GPUs.

Inferences, or predictions made from a trained model, can be served from either CPUs or GPUs. Serving inferences from GPUs is part of Nvidia’s strategy to get greater adoption of its processors, countering what AMD is doing to break Nvidia’s stranglehold on the machine learning GPU market.

Nvidia claims the GPU-based TensorRT beats CPU-only approaches across the board for inferencing. In one of Nvidia’s proffered benchmarks, the AlexNet image classification test under the Caffe framework, TensorRT running on Nvidia’s Tesla P40 processor is said to be 42 times faster than a CPU-only version of the same test (16,041 images per second vs. 374). (Always take industry benchmarks with a grain of salt.)
