{"id":38638,"date":"2018-11-19T19:14:21","date_gmt":"2018-11-19T16:14:21","guid":{"rendered":"https:\/\/www.altoros.com\/blog\/?p=38638"},"modified":"2020-05-20T04:15:08","modified_gmt":"2020-05-20T01:15:08","slug":"improving-facial-recognition-with-super-fine-attributes-and-tensorflow","status":"publish","type":"post","link":"https:\/\/www.altoros.com\/blog\/improving-facial-recognition-with-super-fine-attributes-and-tensorflow\/","title":{"rendered":"Improving Facial Recognition with Super-Fine Attributes and TensorFlow"},"content":{"rendered":"<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-transparent ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li 
class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.altoros.com\/blog\/improving-facial-recognition-with-super-fine-attributes-and-tensorflow\/#Identification_without_facial_recognition\" >Identification without facial recognition<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.altoros.com\/blog\/improving-facial-recognition-with-super-fine-attributes-and-tensorflow\/#Precise_image_labeling_with_super-fine_attributes\" >Precise image labeling with super-fine attributes<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.altoros.com\/blog\/improving-facial-recognition-with-super-fine-attributes-and-tensorflow\/#Accelerating_a_super-fine_model_with_TensorFlow\" >Accelerating a super-fine model with TensorFlow<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.altoros.com\/blog\/improving-facial-recognition-with-super-fine-attributes-and-tensorflow\/#Who_can_use_this\" >Who can use this?<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.altoros.com\/blog\/improving-facial-recognition-with-super-fine-attributes-and-tensorflow\/#Want_details_Watch_the_videos\" >Want details? 
Watch the videos!<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.altoros.com\/blog\/improving-facial-recognition-with-super-fine-attributes-and-tensorflow\/#Further_reading\" >Further reading<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/www.altoros.com\/blog\/improving-facial-recognition-with-super-fine-attributes-and-tensorflow\/#About_the_expert\" >About the expert<\/a><\/li><\/ul><\/nav><\/div>\n<h3><span class=\"ez-toc-section\" id=\"Identification_without_facial_recognition\"><\/span>Identification without facial recognition<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>On April 15, 2013, a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Boston_Marathon_bombing\" rel=\"noopener noreferrer\" target=\"_blank\">tragedy<\/a> befell the annual Boston Marathon as two homemade pressure cooker bombs detonated near the finish line. There were several hundred casualties reported after the incident.<\/p>\n<p>Law enforcement surveillance teams scoured images and videos of the marathon to find suspects. 
Three days later, the Federal Bureau of Investigation (FBI) released images of two men who were later identified as the terrorists responsible for the bombing.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-boston-marathon-bombing.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-boston-marathon-bombing-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38641\" \/><\/a><small>Faces aren&#8217;t always seen on camera (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<div id=\"attachment_38662\" style=\"width: 150px\" class=\"wp-caption alignright\"><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Daniel-Martinho-Corbishley-1.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-38662\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Daniel-Martinho-Corbishley-1-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" class=\"size-thumbnail wp-image-38662\" \/><\/a><p id=\"caption-attachment-38662\" class=\"wp-caption-text\"><small>Daniel Martinho-Corbishley<\/small><\/p><\/div>\n<p>At a recent <a href=\"https:\/\/www.meetup.com\/TensorFlow-London\/events\/255229824\/\" rel=\"noopener noreferrer\" target=\"_blank\">TensorFlow meetup<\/a> in London, <a href=\"https:\/\/www.linkedin.com\/in\/danielmartinhocorbishley\" rel=\"noopener noreferrer\" target=\"_blank\">Dr. Daniel Martinho-Corbishley<\/a>, CEO at Aura Vision Labs, used this investigation as an example and explained why the search took several days. &#8220;Faces are really hard to spot in crowds,&#8221; he said. 
Additionally, not all images and video footage contain faces depending on the camera angle and position.<\/p>\n<blockquote><p><em>&#8220;Surveillance teams all over the world spend hundreds of thousands every year just crawling through video footage. This problem is about to get worse because Internet video surveillance is looking to grow seven times in the next three years. We need to find a way of identifying people without being able to see their faces.&#8221; \u2014Dr. Daniel Martinho-Corbishley<\/em><\/p><\/blockquote>\n<p>Though hard biometrics\u2014facial recognition, DNA detection, iris scanning, and fingerprint analysis\u2014can uniquely identify a person, they are very difficult to capture. So, are there alternative methods of recognizing an individual? At the meetup, Daniel introduced the attendees to the method of <em>super-fine attributes<\/em>, which relies on using soft biometrics rather than hard ones and can serve as a solution to the problem.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Precise_image_labeling_with_super-fine_attributes\"><\/span>Precise image labeling with super-fine attributes<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Soft_biometrics\" rel=\"noopener noreferrer\" target=\"_blank\">Soft biometrics<\/a> use multiple visual cues\u2014such as gender, age, height, weight, build, hair color and length, skin color, clothing, etc.\u2014to create a unique description that can identify a person. Unlike hard biometrics, soft ones can be seen from a distance regardless of image quality and a person&#8217;s angle or pose. 
This makes soft biometrics far better candidates for recognition goals.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-soft-biometrics.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-soft-biometrics-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38647\" \/><\/a><small>Soft biometrics can be used on low-quality images (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<blockquote><p><em>&#8220;Hard biometrics are as great as they&#8217;re very discriminative. You can have one that can uniquely identify someone most of the time, but they&#8217;re really hard to capture. You can&#8217;t capture them with a CCTV image. Soft biometrics are the opposite, you can see a lot of soft biometrics in blurry and grainy images.&#8221; \u2014Dr. Daniel Martinho-Corbishley, Aura Vision Labs<\/em><\/p><\/blockquote>\n<p>To label images properly with soft biometrics, Daniel introduced <a href=\"https:\/\/ieeexplore.ieee.org\/document\/8359307\" rel=\"noopener noreferrer\" target=\"_blank\">the concept of super-fine attributes<\/a>. According to the study, super-fine attributes simultaneously encapsulate multiple, integral concepts of a single trait as <em>multi-dimensional coordinates<\/em>. 
This enables more intricate image descriptions, which categorical or binary attributes cannot account for.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-super-fine-attributes.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-super-fine-attributes-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38654\" \/><\/a><small>Super-fine attributes can categorize obscure images (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<p>For instance, super-fine attributes can be used to label a blurry image that somewhat looks like a female as &#8220;obscure, but vaguely female.&#8221; These labels are linked to the coordinates in the super-fine space, so visually similar images are closer to one another.<\/p>\n<blockquote><p><em>&#8220;We want to move from the (categorical\/binary space) to a super-fine attribute space. It&#8217;s just projecting images as multi-dimensional coordinates in a continuous space, rather than having them as fixed binary or categorical labels. It&#8217;s a more objective way of comparing the similarities and differences of different types of images.&#8221; \u2014Dr. Daniel Martinho-Corbishley<\/em><\/p><\/blockquote>\n<p>To create a super-fine space, <em>crowd prototyping<\/em> can be used. With this approach, images are crowdsourced to discover visual labels and their relationships. In the following example, pairwise similarities are crowdsourced between two images at a time. The differences between images form a high-dimensional distance matrix, which is then used to create an embedding. 
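These first two steps, turning crowdsourced pairwise dissimilarities into a distance matrix and embedding it as multi-dimensional coordinates, can be sketched in a few lines of NumPy. This is a minimal illustration with invented dissimilarity values, using classical multidimensional scaling as one straightforward embedding choice; it is not Aura Vision's actual pipeline:

```python
import numpy as np

# Hypothetical crowdsourced dissimilarities between four images
# (0 = judged identical, 1 = judged completely different); values are invented.
D = np.array([
    [0.0, 0.1, 0.9, 0.8],
    [0.1, 0.0, 0.8, 0.9],
    [0.9, 0.8, 0.0, 0.2],
    [0.8, 0.9, 0.2, 0.0],
])

# Classical multidimensional scaling: double-center the squared distances,
# then take the top eigenvectors as 2-D coordinates for each image.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
B = -0.5 * J @ (D ** 2) @ J                # inner-product (Gram) matrix
eigvals, eigvecs = np.linalg.eigh(B)
top = np.argsort(eigvals)[::-1][:2]        # two largest eigenvalues
coords = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

# Visually similar images (0 and 1) land closer together than dissimilar ones.
assert np.linalg.norm(coords[0] - coords[1]) < np.linalg.norm(coords[0] - coords[2])
```

Any distance-preserving embedding would serve the same purpose here; the resulting coordinates are what a clustering step then operates on.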
A clustering algorithm can then be employed to identify labels.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-crowd-prototyping.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-crowd-prototyping-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38664\" \/><\/a><small>A crowd prototyping methodology (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<p>Once the super-fine space has been identified, new images are mapped into the space.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-super-fine-attributes-new-images.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-super-fine-attributes-new-images-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38667\" \/><\/a><small>Matching new images (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<blockquote><p><em>&#8220;We just match the new image to whichever prototype it&#8217;s most related to. This is a more objective way for humans to label new images.&#8221; \u2014Dr. Daniel Martinho-Corbishley<\/em><\/p><\/blockquote>\n<p>In another scenario, super-fine age labels can be applied to image classification. 
This way, the division between ages is clearly depicted with &#8220;very young&#8221; and &#8220;very old&#8221; at opposite corners of the data set.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-super-fine-age-labels.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-super-fine-age-labels-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38669\" \/><\/a><small>Super-fine age labels created from the pedestrian attribute (PETA) data set (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<blockquote><p><em>&#8220;Age is really interesting. We get very young at the top left, and then we get this gradient of age through the space as they get older and older. This is exactly what we&#8217;d expect, because very young and very old are visually the most distinct classes, so you can see that there&#8217;s a large distance between them. As they become obscure, they cluster in the middle, because you can&#8217;t tell anything about the age.&#8221; \u2014Dr. 
Daniel Martinho-Corbishley, Aura Vision Labs<\/em><\/p><\/blockquote>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Accelerating_a_super-fine_model_with_TensorFlow\"><\/span>Accelerating a super-fine model with TensorFlow<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>During the presentation, Daniel shared a few TensorFlow techniques, which, according to him, &#8220;helped to iterate solutions faster.&#8221; In particular, his team made use of the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">tf.train.slice_input_producer<\/code> function to shuffle and slice tensors, read images, and batch inputs.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-dataset-loading.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-dataset-loading-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38682\" \/><\/a><small>Data set loading with TensorFlow (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<blockquote><p><em>&#8220;TensorFlow is very flexible, you can add another tensor if you want to have some metadata that&#8217;s working alongside your training data and it&#8217;s not so much to fiddle with.&#8221;<br \/>\n\u2014Dr. Daniel Martinho-Corbishley, Aura Vision Labs<\/em><\/p><\/blockquote>\n<p>To increase the variance and improve the robustness of the model, the <code style=\"color: #222222; background-color: #e6e6e6; padding: 1px 2px;\">read_image_path<\/code> function may be created. 
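One way the augmentation half of such a function might look is sketched below, framework-free, in NumPy. The operation kinds follow those named in the talk (flip, color, rotate, stretch), but this implementation and its parameters are illustrative assumptions, not Aura Vision's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Randomly flip, color-shift, rotate, and stretch an image.

    `image` is an H x W x 3 float array with values in [0, 1]. The
    operations mirror the augmentation kinds named in the talk; the
    parameters here are invented for illustration.
    """
    out = image
    if rng.random() < 0.5:                                         # horizontal flip
        out = out[:, ::-1, :]
    out = np.clip(out + rng.uniform(-0.1, 0.1, size=3), 0.0, 1.0)  # per-channel color shift
    out = np.rot90(out, int(rng.integers(0, 4)))                   # random 90-degree rotation
    out = np.repeat(out, int(rng.integers(1, 3)), axis=1)          # crude horizontal stretch
    return out

sample = rng.random((8, 8, 3))   # stand-in for a decoded image
augmented = augment(sample)
```

In a TensorFlow input pipeline like the one described, the same operations would instead be expressed with TensorFlow image ops, so that augmentation runs as part of data loading.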
It allows for augmenting images in various ways, such as flip, color, rotate, and stretch.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-image-augmentation-pipeline.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-image-augmentation-pipeline-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38685\" \/><\/a><small>Image augmentation with TensorFlow (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<p>TensorFlow also enables the training of multiple models on different data sets.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-loading-multiple-checkpoints.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-loading-multiple-checkpoints-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38688\" \/><\/a><small>Training multiple models with TensorFlow (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<blockquote><p><em>&#8220;You&#8217;ve got variable scoping, so you simply initialize the network. You have different variable scopes for different stages of the model. We then train those models separately. When we want to load in those check points, we just create a saver.&#8221; \u2014Dr. 
Daniel Martinho-Corbishley<\/em><\/p><\/blockquote>\n<p>To get a comparison between image classification types, Daniel and his team trained a <a href=\"https:\/\/www.kaggle.com\/pytorch\/resnet152\" rel=\"noopener noreferrer\" target=\"_blank\">ResNet-152 model<\/a> with binary classification and super-fine regression.<\/p>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-super-fine-vs-conventional.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-super-fine-vs-conventional-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38676\" \/><\/a><small>Super-fine and binary model comparison (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<blockquote><p><em>&#8220;For just gender and age attributes, we see super-fine outperform binary by 8.25% on the area under curve. When we have three super-fine attributes, and we compare that to 35 binary attributes, we see that super-fine still outperforms by 4%. That&#8217;s because the labels are much more discriminative, more objective, so the labels are much more relevant for those images. The machine is able to learn a better representation for each image.&#8221;<br \/>\n\u2014Dr. Daniel Martinho-Corbishley, Aura Vision Labs<\/em><\/p><\/blockquote>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Who_can_use_this\"><\/span>Who can use this?<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>So, what can the super-fine model offer to the real world? According to Daniel, it works well in the retail analytics space. 
Through the super-fine model, retailers can get a better grasp of market demographics.<\/p>\n<blockquote><p><em>&#8220;If a store is running a marketing campaign to target females aged 25\u201334, they need to know if it is actually working. Before they would have a footfall counter say a few more people walked into the store, but did it actually work? Now, you can actually tell them how many of that demographic they were targeting came to their store.&#8221;<br \/>\n\u2014Dr. Daniel Martinho-Corbishley, Aura Vision Labs<\/em><\/p><\/blockquote>\n<p><center><a href=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-labs-solution.png\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Tensorflow-ML-AI-image-recognition-aura-vision-labs-solution-1024x576.png\" alt=\"\" width=\"640\" class=\"aligncenter size-large wp-image-38693\" \/><\/a><small>Aura Vision Labs&#8217; super-fine recognition (<a href=\"https:\/\/www.slideshare.net\/seldon_io\/tensorflow-london-18-dr-daniel-martinhocorbishley-from-science-to-startups-with-tensorflow-computer-vision-and-people\" rel=\"noopener noreferrer\" target=\"_blank\">Image credit<\/a>)<\/small><\/center><\/p>\n<p>Though the scenario Daniel described was commercial retail, the flexibility of the super-fine model allows it to be adapted to other requirements. Organizations that rely on hard biometrics can fall back on soft biometrics and super-fine attributes to create recognition software that is just as reliable, if not more so.<\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Want_details_Watch_the_videos\"><\/span>Want details? 
Watch the videos!<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<table width=\"100%\">\n<tbody>\n<tr>\n<td>\n<div style=\"float:right; width:50%; padding-left:15px; font-size:14px;\">\n<p><strong>Table of contents<\/strong><\/p>\n<ol style=\"padding-top:10px;\">\n<li style=\"margin-bottom: 10px;\">Why is facial recognition not always available? (0:54)<\/li>\n<li style=\"margin-bottom: 10px;\">What are soft biometrics? (2:20)<\/li>\n<li style=\"margin-bottom: 10px;\">What are super-fine attributes? (4:50)<\/li>\n<li style=\"margin-bottom: 10px;\">How does crowd prototyping work? (6:33)<\/li>\n<li style=\"margin-bottom: 10px;\">What does a super-fine data set look like? (9:03)<\/li>\n<li style=\"margin-bottom: 10px;\">How does super-fine regression compare to binary classification? (10:05)<\/li>\n<li style=\"margin-bottom: 10px;\">How was TensorFlow used to speed up machine learning? (11:28)<\/li>\n<li style=\"margin-bottom: 10px;\">Identifying super-fine attributes in real time (14:15)<\/li>\n<li style=\"margin-bottom: 10px;\">Super-fine recognition in retail analytics (17:16)<\/li>\n<li style=\"margin-bottom: 10px;\">Questions and answers (20:30)<\/li>\n<\/ol>\n<\/div>\n<div class=\"video-container\"><iframe loading=\"lazy\" title=\"TensorFlow London: &#039;From science to startups with Tensorflow, Computer Vision and people&#039;\" width=\"1200\" height=\"675\" src=\"https:\/\/www.youtube.com\/embed\/E8KqRqf7uMI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><small>Below are Daniel&#8217;s slides from the meetup.<\/small><\/p>\n<p><center><iframe loading=\"lazy\" src=\"\/\/www.slideshare.net\/slideshow\/embed_code\/key\/hPtIwlX4itEJs0\" width=\"595\" height=\"485\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" 
style=\"border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;\" allowfullscreen> <\/iframe><\/center><\/p>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Further_reading\"><\/span>Further reading<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<ul>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/using-machine-learning-and-tensorflow-to-recognize-traffic-signs\/\">Using Machine Learning and TensorFlow to Recognize Traffic Signs<\/a><\/li>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/analyzing-satellite-imagery-with-tensorflow-to-automate-insurance-underwriting\/\">Analyzing Satellite Imagery with TensorFlow to Automate Insurance Underwriting<\/a><\/li>\n<li><a href=\"https:\/\/www.altoros.com\/blog\/using-long-short-term-memory-networks-and-tensorflow-for-image-captioning\/\">Using Long Short-Term Memory Networks and TensorFlow for Image Captioning<\/a><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h3><span class=\"ez-toc-section\" id=\"About_the_expert\"><\/span>About the expert<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<div>\n<div style=\"float: right;\"><a href=\"https:\/\/www.linkedin.com\/in\/danielmartinhocorbishley\/\"><img decoding=\"async\" src=\"https:\/\/www.altoros.com\/blog\/wp-content\/uploads\/2018\/11\/Daniel-Martinho-Corbishley-aura-vision-labs-bio.png\" alt=\"\" width=\"120\" class=\"aligncenter size-full wp-image-38698\" \/><\/a><\/div>\n<div style=\"width: 600px;\"><small><a href=\"https:\/\/www.linkedin.com\/in\/danielmartinhocorbishley\/\">Daniel Martinho-Corbishley<\/a> is a co-founder and CEO of Aura Vision Labs, which is behind a video AI platform, specializing in measuring and improving retail shopping experiences. His research involves robust estimation of pedestrian demographics from CCTV imagery using the latest techniques in computer vision and psychological crowdsourcing. 
Daniel\u2019s research is published in the leading applied machine learning journal (IEEE TPAMI), and Aura Vision was featured on BBC Click in May 2018. He completed his PhD in Computer Science and Biometric Identification from the University of Southampton.<\/small><\/div>\n<\/div>\n<hr \/>\n<p><center><small>This post was written by <a href=\"https:\/\/www.altoros.com\/blog\/author\/carlo\/\">Carlo Gutierrez<\/a> and edited by <a href=\"https:\/\/www.altoros.com\/blog\/author\/sophie.turol\/\">Sophia Turol<\/a> and <a href=\"https:\/\/www.altoros.com\/blog\/author\/alex\/\">Alex Khizhniak<\/a>.<\/small><\/center><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Identification without facial recognition<\/p>\n<p>On April 15, 2013, a tragedy befell the annual Boston Marathon as two homemade pressure cooker bombs detonated near the finish line. There were several hundred casualties reported after the incident.<\/p>\n<p>Law enforcement surveillance teams scoured images and videos of the marathon to find suspects. Three days later, [&#8230;]<\/p>\n","protected":false},"author":32,"featured_media":38905,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":"","_links_to":"","_links_to_target":""},"categories":[214],"tags":[748,884,749],"class_list":["post-38638","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tutorials","tag-machine-learning","tag-retail","tag-tensorflow"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Improving Facial Recognition with Super-Fine Attributes and TensorFlow | Altoros<\/title>\n<meta name=\"description\" content=\"Super-fine attributes (or multiple visual cues as age, height, etc.) 
Title: Improving Facial Recognition with Super-Fine Attributes and TensorFlow
Author: Carlo Gutierrez
Published: November 19, 2018. Last updated: May 20, 2020.
Source: https://www.altoros.com/blog/improving-facial-recognition-with-super-fine-attributes-and-tensorflow/
Summary: Super-fine attributes (multiple visual cues, such as age, height, etc.) seen in an image can be used to classify and create labels when hard biometrics aren't available.
Keywords: Machine Learning, Retail, TensorFlow
Estimated reading time: 7 minutes