TensorFlow: The open library for deep learning

After AlphaGo became the first AI to beat a professional Go player, and in recent days also defeated the world champion, it seems the topic of "artificial neural networks" has regained relevance.


And yes: the software Google uses for most of its artificial intelligence needs was released a few months ago under the Apache 2.0 license, and is now available for anyone to download and use (students, researchers, hackers, engineers, developers, and many others).

Its name is TensorFlow; it is programmed through a Python or C/C++ interface, and it can run in a variety of environments: on multiple CPUs, on GPUs, on cloud servers, or on mobile devices running Android and iOS.

In this blog post, the intention is not to dive into the code itself, but rather to give a theoretical introduction to a topic so deep that the expression "deep learning" falls short of describing it. If code is what you are after, you can always consult TensorFlow's quick-start guide on the official website.

Machine learning?

Machine learning! One of the newest and least explored areas of computer science. Even so, there are already several software packages that can do this sort of thing: Caffe, Deeplearning4j, OpenNN, and Torch. In any case, if you are not yet sure where we stand, here is a list of concepts to help us ease into the topic:

  • Artificial neural network: a computing paradigm that attempts to solve problems with a different approach. Typically, these networks are made up of many input nodes, one or more layers of intermediate nodes, and multiple output nodes. To keep things simple, let's say each input node can emit a value between 0 and 1, multiplied by a variable weight "w". Finally, a threshold "b" is defined for the output nodes, and if the sum of all the weighted inputs exceeds that threshold, the output is classified as positive.
  • Genetic algorithm: an algorithm that uses feedback from its own results to improve its design. This part is key, since artificial neural networks contain variables that are at the mercy of the algorithm: the "w" weights.
  • Deep learning: the process by which computers learn to perform a task, given a set of examples defined as true, by refining their neural network and then generalizing it to new situations. As can be seen, the programmer does not define how the circuit is formed; he or she provides the mechanisms needed to evolve it toward its optimal form through many attempts.
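The weighted-sum-and-threshold idea described in the first bullet can be sketched in a few lines of plain Python. The weights and threshold here are illustrative values, not learned ones:

```python
# A single artificial neuron: a weighted sum of inputs compared to a threshold.
def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of the inputs exceeds the threshold b."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total > threshold

# Two inputs between 0 and 1, with illustrative weights "w" and threshold "b".
inputs = [0.9, 0.3]
weights = [0.8, 0.5]
b = 0.5

print(neuron_fires(inputs, weights, b))  # 0.9*0.8 + 0.3*0.5 = 0.87 > 0.5 -> True
```

Training, in this picture, is nothing more than adjusting the values in `weights` until the neuron classifies the examples correctly.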

Where can I find TensorFlow?

While Google has used deep learning almost since its beginnings, with technologies such as the Prediction API and DistBelief (TensorFlow's previous generation), this renowned software library is now used in many popular applications (where the integration is so natural that we forget it exists).

Here are some examples:

  • User behavior: RankBrain is one of the ways Google has ranked its search results since October 2015. It can learn which sites are relevant to a user depending on which links the user clicks; but it does not do so algorithmically: it learns by adjusting its neural network.
  • Speech recognition and natural language processing: voice-to-text conversion learned from thousands of spoken samples together with their transcripts.
  • Translator: the translator learns languages from hundreds of officially translated texts. Sometimes it uses users' suggestions to improve its work.
  • Predictive text: used by the auto-correction mode and text keyboards on Android devices. The words recommended depend on the user and his or her vocabulary.
  • Image recognition: try searching for "red car" in image search. How can there be so many red cars on the Internet? The truth is that most of the images returned are not named "red_car.jpg"; TensorFlow is doing its job by recognizing the abstractions "red" and "car" just by looking at the pictures.


How does it all work?

TensorFlow represents information as a multidimensional array (not very surprisingly called a tensor). The available data is poured into this arrangement, and usually one dimension is reserved for the number of samples used for training. So if, for example, we have 55,000 images of handwritten digits to recognize, each 28×28 pixels, we build an array of 55000×28×28.
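As a plain-Python sketch (nested lists standing in for a real tensor library), a tiny version of that arrangement looks like this; each 28×28 image is also commonly flattened into a single row of 784 values before being fed to the network:

```python
# A tiny stand-in for the training set: 3 "images" of 28x28 pixels,
# represented as nested lists (a 3x28x28 tensor).
num_samples, height, width = 3, 28, 28
images = [[[0.0] * width for _ in range(height)] for _ in range(num_samples)]

# Shape of the tensor: (samples, rows, columns).
print(len(images), len(images[0]), len(images[0][0]))  # 3 28 28

# Flatten each image into a single row of 28*28 = 784 values.
flat = [[pixel for row in image for pixel in row] for image in images]
print(len(flat), len(flat[0]))  # 3 784
```

In TensorFlow itself, this flattened shape is what the `[None, 784]` placeholder in the code example further below refers to: `None` leaves the number of samples open.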

The neural layers needed to solve the problem are then deployed, and the operations to perform are determined. All of these operations can be viewed with TensorBoard, a tool that provides a visual interface. The more layers, the greater the processing requirements; that is why each section of the graph can run on a different processing unit.
After the AI has been trained, you can obtain very interesting visualizations, such as the boxes below, which show the evidence for (blue) and against (red) each region of the image representing the digit shown:


And here is an example of what the source code looks like:

# Implement the softmax regression model

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Download and load the MNIST data set (labels encoded as one-hot vectors)
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# x: the flattened 28x28 input images; W and b: the parameters to learn
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Train the model

y_ = tf.placeholder(tf.float32, [None, 10])  # the true labels
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)  # the variables must be initialized before training
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Evaluate the results
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
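The cross-entropy loss minimized above can be worked out by hand for a single example; this is just the formula -Σ y'·log(y) from the snippet, applied with plain Python and made-up probabilities:

```python
import math

# Predicted probabilities for 3 classes, and the one-hot true label.
y_pred = [0.7, 0.2, 0.1]   # output of the softmax layer (illustrative values)
y_true = [1.0, 0.0, 0.0]   # the true class is class 0

# Cross-entropy: -sum(y_ * log(y)), as in the TensorFlow snippet above.
# Only the true class contributes, so this reduces to -log(0.7).
cross_entropy = -sum(t * math.log(p) for t, p in zip(y_true, y_pred))
print(round(cross_entropy, 4))  # 0.3567
```

The loss shrinks toward 0 as the predicted probability for the true class approaches 1, which is exactly what gradient descent pushes the weights toward.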

More useful links:


Comments?  Contact us for more information. We’ll quickly get back to you with the information you need.