 


Planet Big Data is an aggregator of blogs about big data, Hadoop, and related topics. We include posts by bloggers worldwide. Email us to have your blog included.

 

May 20, 2018


Curt Monash

Some stuff that’s always on my mind

I have a LOT of partially-written blog posts, but am struggling to get any of them finished (obviously). Much of the problem is that they have so many dependencies on each other. Clearly, then, I...

...
 

May 18, 2018


Revolution Analytics

Because it's Friday: Laurel or Yanny

I can only assume you've heard about this already: it's gone wildly viral in the USA, and I assume elsewhere in the world. But it's a lovely example of an auditory illusion, and as regular readers...

...
 

May 17, 2018


Revolution Analytics

Let me tell you what you missed at BUILD

If you weren't able to attend last week's BUILD conference in Seattle, you can always catch up on the keynotes and the session talks online, or read this recap by Charlotte Yarkoni. Or, if you have...

...
 

May 16, 2018


Datameer

GDPR – There is a Silver Lining

It feels like 1999 all over again, when Y2K was looming and IT pros were pulling consecutive all-nighters to get ready for it. GDPR, which is a set of rules under which the EU strengthens...

...
 

May 15, 2018


Revolution Analytics

Mind Bytes: Solving Societal Challenges with Artificial Intelligence

By Francesca Lazzeri (@frlazzeri), Data Scientist at Microsoft. Artificial intelligence (AI) solutions are playing a growing role in our everyday life, and are being adopted broadly, in...

...
 

May 12, 2018

Jean Francois Puget

Implementing libFM in Keras

 


I just won a gold medal in the Talking Data competition on Kaggle, finishing 6th. My approach and solution are described here. The part that triggered the most interest from readers is where I used matrix factorization techniques to generate additional features.

I'll explain it here. Before that, let me briefly explain what this competition was about. Here is how the problem is described on the Kaggle site:


you’re challenged to build an algorithm that predicts whether a user will download an app after clicking a mobile app ad. To support your modeling, they have provided a generous dataset covering approximately 200 million clicks over 4 days!

Each row of the training data contains a click record, with the following features.

  • ip: ip address of click.

  • app: app id for marketing.

  • device: device type id of user mobile phone (e.g., iphone 6 plus, iphone 7, huawei mate 7, etc.)

  • os: os version id of user mobile phone

  • channel: channel id of mobile ad publisher

  • click_time: timestamp of click (UTC)

  • attributed_time: if the user downloads the app after clicking an ad, this is the time of the app download

  • is_attributed: the target that is to be predicted, indicating the app was downloaded

In that dataset, a combination of ip, device and os represents a user in most cases.  In some cases an ip is shared among many users, which adds some complexity.  We'll ignore that for the sake of simplicity.  We can also ignore channel here, as it did not appear to be very useful.  We are therefore given about 200 million rows, each row containing a user, an app, and a click time.  The goal is to predict whether users download the app.

This is reminiscent of recommender systems where the goal is to predict whether users will buy a product (app is the product here). 

The most popular technique for building recommender systems is matrix factorization.  The goal is to compute profiles for users and products so that users with similar buying patterns have similar profiles, and so that products sold to similar users have similar profiles.  One surprisingly simple way to do this is to assign a vector of k numerical values to each user and each product.  The inner product (dot product) of a user vector and a product vector yields the affinity of the two.

The matrix factorization approach is actually quite simple in principle.  You start from a matrix (no kidding... ), and find ways to approximate it.  For recommender systems, the matrix captures past user x product interactions (sales, for instance).  The matrix has users as rows and products as columns (rows and columns can be swapped; the result is the same).  This gives you an MxN matrix C with M users and N products. The matrix element C(i,j) contains the number of times user i has bought product j so far.  Most elements of C have value 0. For better accuracy I often replace counts by their log after adding 1 (the numpy.log1p() function), as it maps 0 to 0 and makes the count distribution less skewed towards large counts.
These vectors are computed via an approximate factorization of the MxN matrix C.  For this we look for two matrices, an Mxk user matrix U and a kxN product matrix P, such that UP is as close as possible to C.  The rows of U are the user vectors, and the columns of P are the product vectors. One way to obtain U and P from C is to use the truncated singular value decomposition (tSVD).   I used the scikit-learn implementation of tSVD in the Talking Data competition.  Other ways are possible, for instance ALS.
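To make this concrete, here is a minimal sketch (not the competition code; the toy user/app/count triples are made up for illustration) of building such a count matrix with log1p counts and factorizing it with scikit-learn's TruncatedSVD:

import numpy as np
from scipy.sparse import coo_matrix
from sklearn.decomposition import TruncatedSVD

# Toy (user, app, count) interactions; in the competition these counts
# would be aggregated from the click log (e.g. with a pandas groupby).
users = np.array([0, 0, 1, 2, 2])
apps = np.array([1, 3, 0, 1, 2])
counts = np.array([5., 1., 2., 7., 1.])

# M x N count matrix C, with log1p applied to the counts
C = coo_matrix((np.log1p(counts), (users, apps)), shape=(3, 4)).tocsr()

k = 2                                # number of latent dimensions
svd = TruncatedSVD(n_components=k, random_state=0)
U = svd.fit_transform(C)             # M x k user vectors
P = svd.components_                  # k x N app vectors

affinity = U[0] @ P[:, 1]            # dot product = affinity of user 0 and app 1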

Matrix factorizations are great, but they are limited to two factors.  What if we want to use the relationships among 3 or more factors?  For instance, what if we want to use the interactions between app, os, and device? Fortunately for us, a great generalization of matrix factorization was introduced in this seminal paper:

Steffen Rendle (2010): Factorization Machines, in Proceedings of the 10th IEEE International Conference on Data Mining (ICDM 2010), Sydney, Australia PDF

Moreover, Rendle implemented his algorithm in the very successful libFM.  I could have used libFM directly, but it requires a different data format than what I was using (pandas or numpy).  Also, it is a batch command-line tool, not integrated with the Python environment I am familiar with.  I therefore decided to implement the libFM approach as a deep learning model, using Keras.  I shared the code for it on github, along with how I used it in that competition.

Let me focus here on the libFM code.  This code is rather short:

# Imports needed to run this snippet (Keras 2.x functional API)
from keras.layers import (Input, Dense, Embedding, Flatten, Add, Subtract,
                          Dot, Concatenate, BatchNormalization)
from keras.models import Model
from keras.regularizers import l2
from keras.optimizers import Adam

k_latent = 2            # size of the latent factor vectors
embedding_reg = 0.0002  # L2 regularization on the embeddings
kernel_reg = 0.1        # L2 regularization on the final dense layer

def get_embed(x_input, x_size, k_latent):
    if x_size > 0: # categorical feature: use an embedding
        embed = Embedding(x_size, k_latent, input_length=1, 
                          embeddings_regularizer=l2(embedding_reg))(x_input)
        embed = Flatten()(embed)
    else:          # continuous feature: use a dense projection
        embed = Dense(k_latent, kernel_regularizer=l2(embedding_reg))(x_input)
    return embed

def build_model_1(X, f_size):
    dim_input = len(f_size)
    
    input_x = [Input(shape=(1,)) for i in range(dim_input)] 
     
    # One bias (scalar) and one factor vector per feature
    biases = [get_embed(x, size, 1) for (x, size) in zip(input_x, f_size)]
    
    factors = [get_embed(x, size, k_latent) for (x, size) in zip(input_x, f_size)]
    
    # Pairwise interactions computed in linear time: dot(x_j, S - x_j)
    s = Add()(factors)
    
    diffs = [Subtract()([s, x]) for x in factors]
    
    dots = [Dot(axes=1)([d, x]) for d, x in zip(diffs, factors)]

    x = Concatenate()(biases + dots)
    x = BatchNormalization()(x)
    output = Dense(1, activation='relu', kernel_regularizer=l2(kernel_reg))(x)
    model = Model(inputs=input_x, outputs=[output])
    opt = Adam(clipnorm=0.5)
    model.compile(optimizer=opt, loss='mean_squared_error')
    output_f = factors + biases
    model_features = Model(inputs=input_x, outputs=output_f)
    return model, model_features

Let's look at it bit by bit.  We start with the input, as usual.

    input_x = [Input(shape=(1,)) for i in range(dim_input)] 

The first layer creates the latent vectors.  Here we create two latent vectors per category: a factor vector and a bias vector.   This is done in these lines:

    biases = [get_embed(x, size, 1) for (x, size) in zip(input_x, f_size)]
    
    factors = [get_embed(x, size, k_latent) for (x, size) in zip(input_x, f_size)]

The factor vectors will be multiplied two by two.  We could do it by looping over all combinations, but this would result in N(N-1)/2 dot products if we have N categories.  This can lead to memory issues as the number of categories increases.  Rendle devised a clever way to compute these products in time linear in N.  I am using a slightly different way that I find simpler.  Let me try to explain it.  We want to compute:

F = sum xi xj for all i, j such that i < j

where xi is the factor vector for i-th category.  If we set

S = sum xi

then

2F = sum xj (S - xj) for all j

We only have N products this way!  We actually get each product twice (hence the factor of 2), but this is not an issue, as the coefficients in the vectors will be scaled down to compensate for it. We can implement the set of products xj (S - xj) directly:

    s = Add()(factors)
    
    diffs = [Subtract()([s, x]) for x in factors]
    
    dots = [Dot(axes=1)([d, x]) for d,x in zip(diffs, factors)]
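As a quick sanity check of the identity above (a small numpy sketch of my own, not part of the original code; it treats the xi xj products as dot products, as the Keras code does):

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
factors = [rng.normal(size=4) for _ in range(5)]   # 5 toy factor vectors, k = 4

# Sum over all pairs i < j, computed explicitly
pairwise = sum(np.dot(xi, xj) for xi, xj in combinations(factors, 2))

# Linear-time version: only N dot products, each pair counted twice
S = sum(factors)
linear = sum(np.dot(xj, S - xj) for xj in factors)

assert np.isclose(linear, 2 * pairwise)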

Let's concatenate them with the biases.

    x = Concatenate()(biases + dots)

We are almost done.  If we implemented the original libFM, we would simply take the sum of x.  Given we use a flexible deep learning framework, we can use a more complex last layer.  And we can add some batch normalization to get even better results:

    x = BatchNormalization()(x)
    output = Dense(1, activation='relu', kernel_regularizer=l2(kernel_reg))(x)

Our model is complete; we can finalize and compile it.  We clip the gradient norm in the optimizer to avoid exploding gradients, but this may not be necessary in other applications.

    model = Model(inputs=input_x, outputs=[output])
    opt = Adam(clipnorm=0.5)
    model.compile(optimizer=opt, loss='mean_squared_error')

Given that the purpose of this model is to compute the latent vectors, we need to retrieve them once they have been learned by training the model.  For this we create a secondary model that shares the embedding layers with the previous model.

    output_f = factors + biases
    model_features = Model(inputs=input_x, outputs=output_f)

We can then train the main model.  I use a fairly large batch size of 2¹⁷.  I advise users to tune the batch size to their data and not rely on this very large one by default.

from keras.callbacks import EarlyStopping

n_epochs = 100
P = 17
batch_size = 2**P   # 131,072
earlystopper = EarlyStopping(patience=0, verbose=1)

# X_train, y_train and the sample weights w_train are prepared earlier
model.fit(X_train,  y_train, 
          epochs=n_epochs, batch_size=batch_size, verbose=1, shuffle=True, 
          validation_data=(X_train, y_train), 
          sample_weight=w_train,
          callbacks=[earlystopper],
         )

Once the main model is trained, we simply have to make predictions with the model_features model to get the embeddings. 

X_pred = model_features.predict(X_train, batch_size=batch_size)
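model_features has multiple outputs, so the prediction comes back as a list of arrays. A plausible way to turn it into feature columns for the downstream model (my own sketch, not necessarily how it was done in the competition) is to stack them horizontally:

import numpy as np

# One array per output: the k_latent-wide factor embeddings followed by the
# 1-wide biases, each with one row per training example.
embedding_features = np.hstack(X_pred)

# embedding_features can then be appended to the original feature matrix
# before training the downstream supervised model.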

This is a beautiful trick if you think about it: we share layers between models, and we don't directly use the model we trained for predictions!

I used the above to compute features for my model, i.e. as an unsupervised learning step.  One could however use it in a supervised learning way by using the target of the problem directly, instead of interaction counts.

All the code above, plus its use in the Talking Data competition, is available on github.

 

 

May 11, 2018


Revolution Analytics

Because it's Friday: Planes, pains, and automobiles

It's conference season, so I've been travelling on a lot of planes recently. (And today, I'm heading to Budapest for eRum 2018.) So this resonates with me right now: There's another video in the same...

...

Revolution Analytics

Custom R charts coming to Excel

This week at the BUILD conference, Microsoft announced that Power BI custom visuals will soon be available as charts with Excel. You'll be able to choose a range of data within an Excel workbook, and...

...
 

May 10, 2018


BrightPlanet

Keeping up with the constantly changing Deep Web, BrightPlanet has developed the solutions that work

Website structures are constantly changing. You might be surprised how often websites swap formatting, themes, or their entire layout. These changes will typically break custom harvest scripts from inexpensive or roll-your-own harvesting solutions. BrightPlanet’s harvest engine and quality assurance solution is designed to be robust and fault tolerant to these types of website changes. Our […] The post Keeping up with the constantly changing Deep Web, BrightPlanet has developed the...

 

May 09, 2018


Revolution Analytics

Open-Source Machine Learning in Azure

The topic for my talk at the Microsoft Build conference yesterday was "Migrating Existing Open Source Machine Learning to Azure". The idea behind the talk was to show how you can take the open-source...

...
 

May 08, 2018


Revolution Analytics

In case you missed it: April 2018 roundup

In case you missed them, here are some articles from April of particular interest to R users. Microsoft R Open 3.4.4, based on R 3.4.4, is now available. An R script by Ryan Timpe converts a photo...

...

Datameer

Cloud Platforms for Analytics: The House Brand Ain’t Always Enough – conclusion

Last week we looked at industry trends that have contributed to the creation of market demand for public cloud solutions, what’s in the cloud analytics stack and Amazon integration pairs. If you...

...
Ronald van Loon

How Deep Learning Will Change Customer Experience

Deep learning is a sub-category within machine learning and artificial intelligence. It is inspired by and based on the model of the human brain to create artificial neural networks for machines. Deep learning will allow machines and devices to function in some ways as humans do.

Dr. Rodrigo Agundez of GoDataDriven is co-author of this article and very enthusiastic about the improvements that deep learning can offer. He’s been involved in the data science and analysis field for some time, and is already working on implementing models for practical applications.

Rodrigo notes that the new generation of users wants to interact with devices and appliances in a human-like manner. Take the example of Apple’s Siri, which allows for voice command and voice recognition. Communicating with Siri is similar to interacting with a human.

The user interface for Siri seems simple enough. However, the A.I. algorithms that are designed on the back-end are quite complex.

Designing this kind of interaction with a machine was not possible a few years ago. System designers now have access to complex deep learning algorithms that make it possible to integrate such behavior into machines.

Importance of Deep Learning

Artificial Intelligence will never truly come of age without giving machines the powerful capabilities of deep learning.

The idea of designing deep learning models can be difficult to grasp for many people. This is because understanding human concepts comes naturally to us. But giving the same ability to machines is a very complex process of design.

One way to do it is by structuring data in a way that makes it easier to process for machines. Take the word “fat” for instance. If we say to a friend, “This burger has too much fat,” they would understand what we mean and the word would have a negative connotation here. But if we told a friend that “I would love to get fat from this meal any day,” the word would mean something entirely different.

Creating machines that are capable of understanding minute differences in words embedded in a context may seem like a very small thing, but requires a very large set of data and complex algorithms to execute.

Difference from Traditional Machine Learning

One way to differentiate between traditional machine learning and deep learning is through the use of features. These are the characteristics of the data that help us differentiate and identify one entity from another.

To understand features better, take the example of a normal bank transaction. Features of the transaction help us identify the timing of the transaction, the value transferred, names of the parties to the transaction, and other important information.

In a traditional machine learning model, features have to be designed by humans. In a deep learning model, features are identified by the A.I. itself.

We can take another example of differences between a cat and a dog. If we showed a person a cat and a dog and asked them to point to the cat, they would immediately identify it. However, if the same person was asked to identify the exact features that differentiate the two, they would have a problem. Both creatures have four legs, a body, a tail, and a head. They appear very similar in terms of features. Humans can distinguish one from the other in an instant. Yet, they would have trouble identifying the features that differentiate any pair of a cat and a dog.

This is a problem that data scientists and A.I. developers hope to solve with deep learning. Features can be found even in unstructured data with the help of deep learning algorithms.

Benefit from Deep Learning for the Customer Experience

Rodrigo states that deep learning models are superior to traditional machine learning models at certain A.I. tasks, as they have demonstrated their effectiveness. This can be traced back to 2012, when, in a well-known online image recognition challenge, a deep learning algorithm proved to be twice as effective as any previous algorithm.

If an A.I. model reaches an accuracy of 50%, the device would not be very practical for use. Take the example of automobiles. A person would not trust getting in a car where brakes work 50% of the time.

However, if the accuracy of an A.I. system reaches values around 95%, it would be much more reliable and robust for practical use. Rodrigo believes that this level of accuracy for human-like tasks can only be achieved with deep learning algorithms.

Deep learning can be applied to speech recognition to improve customer experience. Speech recognition technology has been around for quite some time, but it didn’t cross the accuracy boundary to become a marketable product until the introduction of deep learning models.

Home automation systems and devices work through voice command. This is an area where deep learning can significantly improve customer experience.

Royal FloraHolland Case

Royal FloraHolland is the biggest horticulture marketplace and knowledge center in the world. An essential part of their process is having correct photographs of the flowers or plants uploaded by suppliers. These photos need to show the plant, and some images also require a ruler or a tray to be visible.

The task of sorting through all these photographs manually and quickly is basically impossible, so it was decided to implement A.I. for the process.

GoDataDriven designed a system with deep learning algorithms to automate the checking of the images. The system can accurately identify and sort pictures taken from different angles and devices.

The system removed the need for manual human review and completely automated the process for the company.

University Medical Centrum Groningen (UMCG)

Deep learning algorithms were developed for UMCG in collaboration with GoDataDriven, Google and Siemens. This involved the use of MRI data in a 4D format (volume + time). Using deep learning models, the team calculated how the volumes of the heart ventricles evolve over time.

One of the project goals is to assist in decision making regarding pacemakers and treatments. For example, the heart cycle and volumes could be taken into consideration for prognosis of heart failure.

More than 400 images were taken per patient for different heart depths across time. The team at GoDataDriven and Siemens developed multiple models, including binary and multi-class segmentation.

The model based on the U-Net deep learning architecture takes the MRI scan as input and outputs the corresponding volumes.

Traditionally, the process is done manually by looking at the scans and interpreting the results through hand-drawn diagrams.

Future of Deep Learning

Deep learning provides a way for companies to develop life-long learning modules. When more complex and richer algorithms are developed on top of pre-existing ones, companies will be able to achieve incremental growth.

Rodrigo believes that deep learning has a bright future because of its open source community and accessible platforms. Major corporations such as Apple which had built their systems on secrecy are finally coming around to the open-source model.

The main reason they are switching now is that they find deep learning talent acquisition more difficult in comparison with open-source companies, such as Google’s Deep Mind. A company could have developed the most amazing and efficient deep learning system, but if it doesn’t publish its research and share the knowledge online, talented data scientists and deep learning practitioners will not apply to that company.

Currently, deep learning teams like Google Brain and Google Deep Mind, and companies like Facebook and Baidu, find it much easier to hire talented deep learning practitioners. They continuously publish research and open source the related implementations, so the deep learning community is constantly reminded that these companies are at the cutting edge of these technologies.

Since the shift is towards open source and global adoption of this technology, deep learning is likely to do well in the future and impact vast sectors of our society. To learn more about Deep Learning and join the Dutch Data Science week, click here.

About the Authors

Dr. Rodrigo Agundez

Rodrigo Agundez is a Data Scientist and Deep Learning specialist at GoDataDriven. Rodrigo has worked as a consultant on numerous artificial intelligence projects and has given multiple deep learning trainings and workshops inside and outside The Netherlands. If you would like to know more about the exciting world of deep learning, don’t hesitate to contact him via LinkedIn or Twitter.

 

Ronald van Loon

Ronald van Loon is Director at Adversitement, and an Advisory Board Member and Big Data & Analytics course advisor for Simplilearn. He contributes his expertise towards the rapid growth of Simplilearn’s popular Big Data & Analytics category.

If you would like to read more from Ronald van Loon on the possibilities of Big Data and the Internet of Things (IoT), please click “Follow” and connect on LinkedIn, Twitter, and YouTube.

Ronald

Ronald helps data-driven companies generate business value with best-of-breed solutions and a hands-on approach. He has been recognized as one of the top 10 global influencers by DataConomy for predictive analytics, and by Klout for Data Science, Big Data, Business Intelligence and Data Mining. He is a guest author on leading Big Data sites, a speaker/chairman/panel member at national and international webinars and events, and runs a successful series of webinars on Big Data and on Digital Transformation. He has been active in the data (process) management domain for more than 18 years, has founded multiple companies, and is now director at a data consultancy company that is a leader in Big Data & data process management solutions. He has a broad interest in big data, data science, predictive analytics, business intelligence, customer experience and data mining. Feel free to connect on Twitter or LinkedIn to stay up to date on success stories.


The post How Deep Learning Will Change Customer Experience appeared first on Ronald van Loons.

 

May 07, 2018

InData Labs

AI at the Forefront of Digital Transformation Process in 2018

Digital transformation definition Digital transformation has been a big topic for a few years now, and it has many definitions. From a business perspective, digital transformation is about leveraging digital technologies to improve processes, competencies, and business models. It is also about changing the culture of the company because it requires letting go of old...

The post AI at the Forefront of Digital Transformation Process in 2018 first appeared on InData Labs.

 

May 06, 2018


Simplified Analytics

Brand building with Digital Technologies

Which famous business brands do you use daily & give you happy moments? Top names come to our mind are Apple, Disney, Coca-Cola, McDonald's, Cadburys which are occupying most part...

...
 

May 04, 2018


Revolution Analytics

Because it's Friday: The eyes don't work

Spring has finally arrived around here, and a recent tweet reminded me of the depths of fear that Spring brought to me as 7-year old me back in Australia: swooping magpies. These native birds,...

...
 

May 03, 2018


Datameer

Cloud Platforms for Analytics: The House Brand Ain’t Always Enough

Today’s leading cloud platforms include numerous components for storing, processing and analyzing large volumes of data.  All the basics are there: storage, analysis and processing, streaming data...

...
Ronald van Loon

How Mobile AI Will Transform Our Lives

The age of Artificial Intelligence (AI) is almost upon us. Rapid developments in machine learning have allowed us to build better, smarter machines that are capable of making decisions and handling tasks similar to humans.

Some of these developments are also being implemented in mobiles to create the next generation of smarter phones. I attended the recent Huawei Global Analyst Summit in Shenzhen to speak with the heads of Huawei’s development teams and find out more about the future of AI in mobiles.

AI in Your Mobile Will Change the Way You Live

Huawei is a leading brand in mobile phone technology. Their Honor and P series are quite popular with mobile phone buyers, while the flagship Mate series remains one of their best-selling phones, generation after generation.

The company has gained wider interest among phone buyers after the introduction of new and unique technologies. This has helped the company dig into the market share of brands such as Apple, Samsung, and Nokia.

Felix Zhang, the Vice President of consumer software engineering, and James Lu, Director of AI Product Management at the Huawei Consumer Business Group, are very optimistic about the company’s ability to add AI capabilities to smartphones. It would be the next technology shift for the phone industry and would fundamentally change people’s lives the same way that steam engines did more than a hundred years ago.

The new Mate 10 Pro uses an AI chip, called the Kirin 970 chip, that substantially increases the performance, processing power, and camera capabilities of the phone. Zhang noted how the company’s mobiles are capable of recording professional quality videos and taking photos even for those people who have no idea how to operate a camera.

The built-in mobile assistant makes it possible for the phone to act as a guide. It can be integrated with search engines to help answer any questions that people may have.

The Mobile Phone Has Changed Our Lives

The development and progress of mobile phones from their earliest prototype to the latest model have already changed our lives in a major way.

Some of the first feature phones introduced in 1998 gave people the ability to communicate while on the go. This removed the need to remain stationary in one place and gave people more freedom to contact one another while travelling.

The next major change came ten years later with the development of the first touch-screen smart phone. These phones brought the computing power of a workstation and online connectivity to people on the move. Tablets and touch pads were developed shortly afterwards, allowing people the potential to develop their own unique apps without being tied to a seat.

Ten years later, we are once again witnessing another major development in the world of mobiles. Zhang believes that the next generation of intelligent phones will bring countless services to the users’ fingertips that we couldn’t even imagine were possible only a few years ago.

Intelligent Phones

The next generation of intelligent phones has made rapid progress in a very short amount of time. Huawei launched their first generation model, which was AI powered, in 2016.

The company made quick progress and released the second generation model the next year, which featured the world’s first AI processor and significantly improved phone performance. The phone also had a face recognition feature.

The fourth generation phone features master photography powered by AI, which can be used even by people who don’t know how to operate the phone.

The Role of Mobile AI

 Source: Huawei Global Analyst Summit 2018

AI improves the function of mobiles by enabling natural user interaction with the phone. Most of this interaction takes place through voice commands and recognition as well as the camera.

A mobile phone is capable of capturing and recognizing images that are farther away than the human eye can see. The AI enabled phone marketed by Huawei is already capable of seeing and recognizing text as well as faces.

In the next stage of software and hardware development, the company aims to make their mobiles capable of determining patterns and making sense of input being received through the microphone and camera.

Zhang believes that the AI progress for mobile phones will take place in two main domains. First, the developers will need to improve communication efficiencies between the users and their phones over voice, image, video and sensor upgrades.

The second domain involves better apps, content, functionality and third-party features.

Zhang stressed that Huawei is committed to developing smart devices of today into intelligent devices of tomorrow by creating end-to-end capabilities and supporting the development of components, devices, and software.

The Future of Mobile AI

Source: Huawei Global Analyst Summit 2018

The future of mobile AI is rapidly progressing. Businesses involved in the component manufacture and app development for the mobile phone industry aim to make improvements in the following areas.

Mobile Sensing

Better components and hardware features improve the ability of a mobile device to gather information from its surrounding environment. Previously, the phone camera was just a way to capture images and record videos, while the microphone was a way for the user to communicate during calls.

In the mobile phone of the next generation, the camera and microphone will act as the eyes and ears of the intelligent phone. These components are expected to give the phone the ability to become aware of the world around it and make recommendations for its users’ benefit.

Add the face recognition and GPS location feature to the mix and we come very close to a device that can understand its users’ wants and act as an assistant rather than just a communication device.

The face recognition feature is particularly useful, as it would give the phone the ability to recognize the user’s emotions. The device would know when the user is sad, happy, or hungry. The user would be able to program it to order food or automatically shop online.

Self Learning

AI developers are focused on building devices that improve their functionality through self-learning over a period of time. A user’s preferences, likes, and dislikes can be uploaded to a cloud network, and the mobile can access it to improve its understanding about its owner.

For example, complex algorithms would allow the device to understand the user’s choice in music, food, or movies. Consider a user logging in to watch a movie through Netflix. The device would first connect with the database of movies on the network and then make recommendations based on its knowledge about the user.

Huawei R&D Approach

Huawei is very focused on the development of AI for its next generation of mobile devices. The company has more than 15 AI development centers all around the world. They follow an open R&D system, and the company’s research network is open to partners looking for joint innovation.

Many app developers and software companies are working together with Huawei to design systems and smart apps. These apps would add new possible features that make use of the hardware of intelligent phones of the future.

Conclusion

Despite the recent progress, AI is still in its initial stage of practical application. Mobile AI presents us with a lot of opportunities, as these handheld devices can dramatically improve the user experience and interaction in their daily life.

These changes are very likely to impact almost anything that you can see or hear. Mobile AI will transform our life significantly in the coming years.

About the Author

Ronald van Loon is Director at Adversitement, and an Advisory Board Member and Big Data & Analytics course advisor for Simplilearn. He contributes his expertise towards the rapid growth of Simplilearn’s popular Big Data & Analytics category.

If you would like to read more from Ronald van Loon on the possibilities of Big Data and the Internet of Things (IoT), please click “Follow” and connect on LinkedIn, Twitter, and YouTube.

Ronald

Ronald helps data-driven companies generate business value with best-of-breed solutions and a hands-on approach. He has been recognized as one of the top 10 global influencers by DataConomy for predictive analytics, and by Klout for Data Science, Big Data, Business Intelligence and Data Mining. He is a guest author on leading Big Data sites, a speaker/chairman/panel member at national and international webinars and events, and runs a successful series of webinars on Big Data and on Digital Transformation. He has been active in the data (process) management domain for more than 18 years, has founded multiple companies, and is now director at a data consultancy company that is a leader in Big Data & data process management solutions. He has a broad interest in big data, data science, predictive analytics, business intelligence, customer experience and data mining. Feel free to connect on Twitter or LinkedIn to stay up to date on success stories.


The post How Mobile AI Will Transform Our Lives appeared first on Ronald van Loons.

 

May 02, 2018

Knoyd Blog

GDPR... And Why It Only Cures Symptoms

The new General Data Protection Regulation is coming into effect at the end of May... and boy, will it affect all of us. I have decided to share my opinion on the issue.

Everybody is doing it after all.


In case you have been living under a rock (or if you are just not that interested), these are the new (very simplified) rights of every EU citizen regarding their private data:

  1. Right for Consent - you have the right to opt-out of any private data collection

  2. Right for Portability - you have the right to download all the data a company has collected about you

  3. Right for Rectification - you have the right to correct any piece of information about yourself

  4. Right to be Forgotten - you have the right to request deletion of all the data concerning you.


There are plenty of articles, blog posts and guides on what the GDPR will mean and how to prepare your startup/enterprise. All companies are spending valuable resources on implementing necessary changes and making sure they are compliant.

What very few people talk about though, is what GDPR isn’t and why it isn't a solution for everything.

The four points above are undoubtedly personal freedoms that should be available to every person in the world (not just the EU). In case of a valid reason, they should definitely exist. However, it is my personal opinion that the way GDPR is being talked about and perceived by the general public sounds as if we expected the police to guard all of our houses so that we could leave them unlocked. It gives people the feeling that their data privacy is being handled by a higher authority and that they can live in peace, browsing away on all their free internet services.

The existence of these rights in the current world is only possible, however, if the number of people actually enforcing their GDPR rights is not very high. Well, not high enough to influence the business model of the companies they relate to. Each and every internet user today should be aware that Facebook and Google (and Spotify and hundreds of other services) are free to use, but for a reason. A famous quote popularised by Andrew Lewis (blue_beetle) suggests:

’If you are not paying for it, you're not the customer; you're the product being sold.’ 

Most of the free services available to all of us today live off advertising (or some other scheme of indirect monetisation of their user base). This is only possible if all of us participate and a critical user mass exists. So what if every European user opted out of data collection? What then?

 


What happens If everyone enforces their GDPR rights?

In his recent congressional testimony, Mark Zuckerberg hinted at the possibility of offering ad-free paid subscriptions to Facebook, with Josh Constine from TechCrunch reacting with an interesting piece on how much that could/would have to cost to cover Facebook’s current earnings from advertising. Would you pay $11 a month to keep your Facebook account ad-free? What about if you had to pay a small amount for every Google search you make? Or pay for your private Gmail account? Would you do it to keep your online behavior anonymous? I certainly wouldn’t.

Not because I don’t see the value of data privacy. Nor because I am a Data Scientist and therefore ‘one of them’. It is because I understand that very few great things out there are free. And none of them are online.


To Sum This Up:

I think GDPR is great. I think it was necessary and a long time coming. But people should know when and how to use their rights and understand the consequences of a paradigm change. Is it wrong that Cambridge Analytica stole and repurposed a whole bunch of user data? Of course, and I really hope they all go to jail.

But can we blame Facebook for the outcome of the US Presidential Elections? Hardly.

It is the people who lack the education to spot bullshit on the internet, who spread fear and become victims of confirmation bias, who are to blame. It is all of us. My hope is that people will realize that the world is not the same anymore before all the neat (and seemingly free) technology that we use every day has changed.

I know I am trying.

Lukas (Co-founder @Knoyd)


 

April 30, 2018


Revolution Analytics

Microsoft R Open 3.4.4 now available

An update to Microsoft R Open (MRO) is now available for download on Windows, Mac and Linux. This release upgrades the R language engine to version 3.4.4, which addresses some minor issues with...

...

Revolution Analytics

Make a sculpture in LEGO from a photo, with R

The entrance to our office in Redmond is adorned with this sculpture of our department logo, rendered in LEGO: We had fun with LEGO bricks at work this week. APEX is our internal team name, this...

...
 

April 27, 2018


Revolution Analytics

Because it's Friday: Every Wes Anderson Movie

I've found the Honest Trailers series a bit hit-and-miss: sometimes the virtual eyebrow arches just a bit too sharply. But this take on Wes Anderson's films is spot on, and actually makes for a...

...

Revolution Analytics

A maze-solving Minecraft robot, in R

Last week at the New York R Conference, I gave a presentation on using R in Minecraft. (I've embedded the slides below.) The demo gods were not kind to me, and while I was able to show building a...

...
 

April 25, 2018


Datameer

Moving to the Cloud: 5 Challenges, Countless Benefits

While IDC forecasts tremendous payback from cloud investments and spending is estimated to grow at more than six times (17%) the rate of general IT spending (4%) through 2020, adoption forecasts are...

...
 

April 24, 2018


Revolution Analytics

Big changes behind the scenes in R 3.5.0

A major update to R is now available. The R Core group has announced the release of R 3.5.0, and binary versions for Windows and Linux are now available from the primary CRAN mirror. (The Mac release...

...

Forrester Blogs

$122 Billion: The Marketing Technology and Services Investment Sticker Shock

At $90 Billion today and growing to $122 Billion by 2022, CMOs are pouring budgets into investments which align their organization’s operations with greater customer and experience focus. Planning...

...

Forrester Blogs

The Sorry State of Digital Transformation in 2018

Software eats the world, right? We’ve been saying that for how long now? (1997 in my case, I think.) And we’re still transforming? Yep. That it’s taking so long is just an indication...

...

Forrester Blogs

Bank Of America Lowers Security, Removes One-Time Passwords At Payee Add/Change

With the latest change to the BofA online banking bill pay service (which added all sorts of unnecessary and distracting icons and ugly fonts), the bank decided to remove the one-time password...

...

Forrester Blogs

Stop Playing “Telephone” With Data And Analytics. Change The Game With The GQMD Framework

Too many firms today are playing a data-to-insights game of telephone. In the data analytics version, the insights you deliver fail to drive many actions that improve outcomes.  The insights tumble...

...

Simplified Analytics

How HR Analytics play in Digital Age

Today every company is acting on the digital transformation or at least talking about digital transformation. While it is important to drive it by analyzing customer behavior, it is extremely...

...
Ronald van Loon

What Does GDPR Mean For Your Business?

The European General Data Protection Regulation (GDPR) will come into force on May 25, 2018. These regulations will have a significant impact on existing data collection and analysis methods.

Many businesses have become reliant on customer data collection for marketing and product designing. These businesses would need to formulate a new strategy on how to keep their business operations going while dealing with the EU regulations.

The GDPR Regulations

The main objective of GDPR is to ensure that organizations implement strict privacy rules and stronger data security when it comes to protecting personal data. The regulations will make it mandatory to obtain consent from users before acquiring or using their personal data.

Organizations will also be required to inform their customers and users about the personal data that they are collecting and using. Data subjects will have the complete right to withdraw their consent at any time, and organizations will be required to delete the record where consent has been withdrawn.

Noncompliance with the regulations will result in hefty penalties. A company can be fined up to €20 million or 4% of its annual global turnover in extreme cases.

The Complexity of Acquired Data

Data acquired by businesses through the normal channels is usually in a complex form, and the process is completely automated. This presents two major problems for organizations.

Locating Customer Records

In theory, business organizations can become compliant with the new regulations by letting their customers know about their information that is being held by the company. Any data that customers want removed could be deleted.

In reality, the problem is that a majority of businesses may not even be aware that they are holding customer data or how to track it. Many would find it difficult to locate the exact customer information in their massive database or even in their paper files.

Problems in Data Processing

Businesses often rely on built-in models that extract relevant data fields from incoming customer information. Managing these processes will be a challenge for organizations looking to become compliant with the new regulations.

An organization would need customer consent to acquire and use their information. While some customers might be willing to share one set of information, others might be willing to share a different set. A third group might refuse to give consent at all.

This would make the data inconsistent. Any attempt to derive meaningful results or market trends from such data would be of little use.

Solutions Available to Organizations

In order to stay compliant with the new legislation, business organizations will need to apply new techniques for collecting, storing, and processing data. Some of the steps that businesses should take are the following.

  • Inform clients and obtain consent prior to acquiring any personal data.
  • Update the company’s existing or new databases with procedures that allow access, transfer, and deletion of specific client details.
  • Properly document the company policy on collection and processing of client data and have it communicated to clients.
  • Store and process all personal data in a manner which complies with GDPR guidelines.
  • Implement security measures that protect the database from breaches.
  • Continuously monitor and manage the data to ensure that GDPR standards are being met.

The new regulations will come into effect next month, and there is not much time left for businesses to update their systems. The sooner they get started on their data collecting techniques, the better.

Protecting Client Data

The new regulations have two main components to them. The first is about obtaining customer consent for data acquisition. The second relates to ensuring that the acquired data remains protected and secure.

Last year, the U.S. credit rating agency Equifax was hacked. Reports suggest that private and sensitive details of more than 143 million users were stolen by hackers. And everybody has heard about the Facebook Cambridge Analytica data breach that affected 87 million users .

Data breaches like these can severely shake the trust of users in private and public organizations. In the example of Facebook, a large number of users closed their accounts and Facebook lost $50 billion in stock value. This is why the EU has made organizations liable for the security of the data that they collect. Adding security to the data can be achieved in two ways: data minimization and the use of pseudonyms.

Data minimization reduces the database by only retaining the information that is absolutely necessary for processing. Using pseudonyms involves translating data into numbers and unidentifiable strings through encryption. Both methods increase the security of the database and reduce risk for the business and its clients.
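As a simple illustration of the pseudonymization idea (a minimal Python sketch, not a complete GDPR solution; the field and key are hypothetical), a keyed hash can replace a direct identifier with an unidentifiable string while still allowing records to be joined:

import hmac
import hashlib

# Hypothetical secret key; in practice it would be stored outside the analytics database.
SECRET_KEY = b"store-this-key-separately"

def pseudonymize(value: str) -> str:
    # Replace a direct identifier (e.g. an email address) with a keyed hash.
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

customer_id = pseudonymize("jane.doe@example.com")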

Upgrading the Technical Infrastructure

The new technical infrastructure for organizations would need to be compliant with the regulations. Businesses would need to let their customers decide what information is shared and stored by companies.

A comprehensive data governance solution would let an organization quickly sort through its records and delete customer information for which consent has been withdrawn.

It would also allow businesses to review their current processes of data collection and processing. Updating to a unified governance model would also make it easier to create documentation on personal data used by the organization. A company would need to share this document with customers to stay compliant with the new regulations.

Benefits of a Unified Governance Model

A unified data governance model allows businesses to achieve better insights about their customers while staying compliant with the new regulations. Without applying a holistic approach, a business can become susceptible to oversight on regulatory compliance as well as data breaches.

Innovations are being led by unified data governance solutions. These techniques enable an organization to retrieve information about data objects, their physical location, characteristics, and usage. The technology is expected to help improve IT productivity while meeting regulatory requirements.

Bob Nieme, the CEO of Datastreams, has more than a decade of experience in data collection and frameworks. He is very optimistic about the new approach of governed access to data sources. He believes that companies would gain three benefits from a unified governance approach.

  • It will help organizations comply with the new GDPR regulations and avoid penalties.
  • Obtaining customer consent will improve their trust and willingness to share their personal data.
  • Data governance would also reduce risks and improve security.

Planning for the Future in a GDPR Environment

While some organizations have taken steps to adapt to the changes, most businesses are not prepared for the May 25th deadline when GDPR goes into effect. Many of them are either not aware of the effects the changes will have or simply don’t know what to do about them.

In order to avoid fines and a troublesome litigation process in court, companies would need to implement data transformation systems as soon as possible. Advanced data collection and analytics capability would allow them to support proper data governance and management.

Organizations that start the upgrade process sooner will be at an advantage. It will allow them to build a competitive advantage over rival businesses. Organizations that give their customers control over their personal data will also improve customer experience and stand out as reliable businesses.

About the Authors

Bob Nieme

For over 15 years, Bob Nieme has been a Digital Transparency protagonist. In 2014, Bob was recognized as a Privacy by Design Ambassador by the Information and Privacy Commissioner of Ontario, Canada, and in 2013, he was admitted to the Advisory Board of the Department of Mathematics and Computer Science of Eindhoven University of Technology. Bob Nieme founded three leading data-technology companies: Adversitement specializes in data process management, O2MC I/O offers a prescriptive web computing framework, and Datastreams.io empowers data-driven collaboration by providing governed access to trusted data sources.

Ronald van Loon

Ronald van Loon is Director at Adversitement, and an Advisory Board Member and Big Data & Analytics course advisor for Simplilearn. He contributes his expertise towards the rapid growth of Simplilearn’s popular Big Data & Analytics category.

If you would like to read more from Ronald van Loon on the possibilities of Big Data and the Internet of Things (IoT), please click “Follow” and connect on LinkedIn, Twitter, and YouTube.

Ronald

Ronald helps data-driven companies generate business value with best-of-breed solutions and a hands-on approach. He has been recognized as one of the top 10 global influencers by DataConomy for predictive analytics, and by Klout for Data Science, Big Data, Business Intelligence and Data Mining. He is a guest author on leading Big Data sites, a speaker/chairman/panel member at national and international webinars and events, and runs a successful series of webinars on Big Data and on Digital Transformation. He has been active in the data (process) management domain for more than 18 years, has founded multiple companies, and is now director at a data consultancy company that is a leader in Big Data & data process management solutions. He has a broad interest in big data, data science, predictive analytics, business intelligence, customer experience and data mining. Feel free to connect on Twitter or LinkedIn to stay up to date on success stories.


The post What Does GDPR Mean For Your Business? appeared first on Ronald van Loons.

 

April 23, 2018


Forrester Blogs

Sometimes Questions Are The Fastest Path To Solutions

By nature, people like to help. Anyone who has ever networked their way into a new job by doing a round of purposeful informational interviews knows this. And while the optimal outcome for that...

...
 

April 20, 2018


Forrester Blogs

Marketers, You Desperately Need A New Mindset

You have to feel bad for marketers: For years, they’ve tried to keep pace with changing buyer behaviors and adapt to the cross-device and cross-channel habits of their customers. To guide their...

...

BrightPlanet

All websites are not created equal. BrightPlanet knows how to harvest the exact data clients need, whether it is Deep Web, Dark Web or Surface Web content.

BrightPlanet provides terabytes of data for various analytic projects across many industries. Our role is to locate open-source web data, harvest the relevant information, curate the data into semi-structured content, and provide a stream of data feeding directly into analytic engines, data visualizations, or reports. In this blog series, we are going to be diving […] The post All websites are not created equal. BrightPlanet knows how to harvest the exact data clients need, whether it is...


Revolution Analytics

AI, Machine Learning and Data Science Roundup: April 2018

A monthly roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications I've...

...
 

April 19, 2018


Forrester Blogs

6 Metrics That Matter For B2B Marketers And 6 Bonus Fun Facts

In 2017, 92% of global B2B marketing decision makers said that improving the ROI of marketing or effectiveness of marketing would be among their top marketing initiatives over the 12 months that...

...
InData Labs

5 Things you Must Consider to Maximize the Value of your Company’s Predictive Analytics and Machine Learning Initiatives

Investigating company data for insights is a well known and widely adopted practice. However, using predictive analytics and machine learning is the next frontier in data analysis. The ability to predict future outcomes is what sets predictive analytics apart from other analytics used today, such as descriptive analytics, which gives answer to the question “what...

The post 5 Things you Must Consider to Maximize the Value of your Company’s Predictive Analytics and Machine Learning Initiatives first appeared on InData Labs.

 

April 18, 2018


Revolution Analytics

Uber overtakes taxis in New York City

In an update to his analysis of taxi and ride-share trips, Todd Schneider reports that the number of daily Uber rides exceeds the number of taxi rides in New York City, as of November 2017. The data...

...

Datameer

GDPR Is Almost Here. Are Your Analytic Processes Ready?

The May 25, 2018 deadline for the General Data Protection Regulation (GDPR) is almost upon us. And the question many in management are asking is: Are we ready? The post GDPR Is Almost Here. Are Your...

...
 

April 17, 2018

Ronald van Loon

Google DeepMind: The Importance of Artificial Intelligence

Developments in Artificial Intelligence (A.I.) are happening faster today than ever before. However, the nature of progress in A.I. is such that massive technological breakthroughs might go unnoticed while smaller improvements get a lot of media attention.

Take the case of face recognition technology. The ability of A.I. to recognize faces might seem like a very big deal, but it isn't all that groundbreaking when you consider the nature of applied A.I.

On the other hand, suppose an A.I. is asked to choose between genres of music, such as R&B or rock. While it may seem like a simple choice, the mathematical problem that must be solved before the A.I. makes a decision could take hours or even days.

General A.I. vs. Advanced A.I.

Most people get their idea of A.I. from Hollywood movies and science fiction. They assume that A.I. robots would work and think in the same ways as human beings do. They tend to think of the Terminator or Data from Star Trek: The Next Generation (TNG).

These fictional characters are examples of A.I. that behaves in a very general manner. The A.I. that developers are working on is actually much more advanced: it will be able to perform very complex calculations, but only in a very limited field, while still being unable to perform some of the basic functions that humans perform.

General A.I. Is Not Very Profitable

Take the example of a dishwasher that is programmable and cleans dishes very well. A dishwasher manufacturer may program it to respond to voice commands and play music as well. The dishwasher may learn the times when you usually have dinner and improve its washing quality based on your preferences.

But would it make sense or even be economical to teach this dishwasher how to recognize facial expressions and the mood of the operator?

A generalized A.I. would be able to perform many tasks, but it would not be very good at any one of them. It is economically more efficient to build machines that specialize in particular tasks.

People would love seeing generalized A.I. machines in a store and would find them entertaining, but few would actually buy them for chores at home. This is why A.I. that is advanced at specific tasks is far more useful and gets much more attention from commercial developers.

A.I. and Machine Learning

The real task that lies ahead for developers is to build A.I. that is capable of learning, thinking, and feeling without input from a human. This kind of independent A.I. will be capable of making decisions on its own, and can be considered truly smart.

Before such an A.I. is ready for practical applications, it will have run millions of simulations on its neural network, which help it improve its actions in the real world.

The A.I. does this by making repeated computations and recording the result at each stage of its learning process. Once it finds a correct solution for the first stage, it runs the test from the second stage, again making repeated calculations until it finds the best solution. Once that solution is found, it begins testing solutions for the third stage, and so on.
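To make that stage-by-stage loop concrete, here is a minimal Python sketch; the stages, candidate pools, and acceptance rule are hypothetical placeholders, not anything DeepMind has published.

```python
import random

def solve_in_stages(candidate_pools, is_correct, max_tries=10_000):
    """Minimal sketch of the staged search described above: keep proposing
    candidates for a stage, record the one that works, then move on."""
    history = []  # one accepted solution per stage, in order
    for stage, pool in enumerate(candidate_pools):
        for _ in range(max_tries):
            candidate = random.choice(pool)
            if is_correct(stage, candidate, history):
                history.append(candidate)  # remember this stage's solution
                break
        else:
            raise RuntimeError(f"no solution found for stage {stage}")
    return history

# Hypothetical usage: three stages, each with a small pool of candidate moves,
# and a toy acceptance rule (the 'correct' move is the last one in each pool).
pools = [["left", "right"], ["up", "down"], ["wait", "go"]]
accept = lambda stage, cand, hist: cand == pools[stage][-1]
print(solve_in_stages(pools, accept))  # e.g. ['right', 'down', 'go']
```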

Using this approach on some old video games, developers were able to get amazing results. They tested the model on old classics such as Hungry Snake, where the A.I. learned how to play by making the correct left or right movements, growing to a length that would be very difficult to achieve for even the most expert human players.

The model was also tested on PC Snooker, where the A.I. was able to determine brilliant pool shots that gave it a near-perfect score.

Google DeepMind

DeepMind is one of the leading A.I. development firms in the world. It was founded in 2010 and acquired by Google in 2014. The firm has been at the forefront of technological breakthroughs in the field, and Google's access to very large datasets has allowed its researchers to test a number of Artificial Intelligence concepts.

Testing Game Choices with A.I.

Video games are incredibly useful in testing and improving A.I. learning. Most video games are developed for humans, and have a learning curve. While humans are able to quickly learn and become good at games due to our intuition, A.I. usually starts from scratch and performs poorly in the beginning.

The research team at DeepMind used its DMLab-30 training set, which is built on id Software's Quake III Arena. Using the Arcade Learning Environment, which runs Atari games, the team developed a new training system for A.I. called Importance Weighted Actor-Learner Architectures, or IMPALA for short.

IMPALA allows an A.I. to play a large number of video games very quickly. It sends training information from a series of actors to a series of A.I. learners. Instead of directly connecting the A.I. to the game engine, the developers let the A.I. see only the result of its controller inputs, just as a human player would experience the game.
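The actor/learner split can be illustrated with a small toy in Python: actors only see the outcome of their controller inputs and push trajectories onto a queue, and a separate learner consumes them. The queue, the random placeholder policy, and the rewards below are assumptions made for the sketch; this is not DeepMind's IMPALA implementation.

```python
import queue
import random
import threading

trajectory_queue = queue.Queue()

def actor(actor_id, num_episodes=5):
    """Play episodes with a placeholder policy and send the trajectories on."""
    for _ in range(num_episodes):
        trajectory = [(random.choice(["left", "right", "fire"]), random.random())
                      for _ in range(20)]  # (controller input, observed reward) pairs
        trajectory_queue.put((actor_id, trajectory))

def learner(num_updates):
    """Consume trajectories from all actors and 'update' the shared policy."""
    for _ in range(num_updates):
        actor_id, trajectory = trajectory_queue.get()
        episode_return = sum(reward for _, reward in trajectory)
        print(f"update from actor {actor_id}: episode return {episode_return:.2f}")

actors = [threading.Thread(target=actor, args=(i,)) for i in range(3)]
learner_thread = threading.Thread(target=learner, args=(3 * 5,))
for t in actors + [learner_thread]:
    t.start()
for t in actors + [learner_thread]:
    t.join()
```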

The developers found the results to be quite good based on a single A.I. player’s interaction with the game world. How well the A.I. performs against human players is still under testing.

Most A.I. in games is disadvantaged against human players. Developers try to offset this by giving the A.I. certain advantages over humans. In arcade games this is done by giving the A.I. special powers that the human player does not possess. In strategy games, this is done by giving the A.I. extra resources.

A.I. that performs well against human players without any hidden benefits would truly be considered an amazing advancement.

Self-Teaching Robot Hands

Developments in neural networks have allowed robots to run millions of simulations or run complex calculations faster than humans can. Yet, when it comes to figuring out physical things, A.I. robots still struggle, because they face a nearly infinite number of possibilities to choose from.

In order to counter this problem, DeepMind created an innovative paradigm for A.I.-powered robots. Scheduled Auxiliary Control (SAC-X) gives a robot a simple task such as “clean up this tray,” and rewards it for completing the task.

The researchers don’t provide instructions on how to complete the task. That is something that the robot A.I. hand must figure out on its own.
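A minimal sketch of that sparse-reward setup, assuming a toy tray environment and random placeholder actions; it only illustrates the idea of rewarding the finished task rather than intermediate steps, and is not DeepMind's SAC-X code.

```python
import random

def run_episode(max_steps=200):
    """The robot is told only the goal ('the tray is empty') and gets a reward
    when that goal is met; nothing tells it which intermediate steps to take."""
    tray = ["cup", "fork", "plate"]  # objects the robot must clear
    for step in range(max_steps):
        action = random.choice(["reach", "grasp", "lift", "place"])
        # Toy dynamics: a 'place' action sometimes moves one object off the tray.
        if action == "place" and tray and random.random() < 0.5:
            tray.pop()
        if not tray:                  # goal reached
            return 1.0, step + 1      # sparse reward, only on success
    return 0.0, max_steps             # no reward if the goal is never reached

reward, steps = run_episode()
print(f"reward={reward}, steps taken={steps}")
```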

The developers believe that progress in performing physical and precise tasks will lead to the next generation of robots and A.I.

Understanding Thought Process

Researchers at DeepMind are also looking at ways to make A.I. understand how humans reason and make sense of the things around them.

Humans have the intuitive ability to evaluate the beliefs and intentions of others around them. This is a trait shared by very few creatures in the animal kingdom. For instance, if we see someone drinking a glass of water, we can infer that the person was thirsty and that water quenches their thirst.

The ability to understand these abstract concepts is called the “theory of mind,” and it plays a crucial role in our social interactions. Developers at DeepMind ran a simple experiment to test it in an A.I.

They first allowed an A.I., ToM-Net, to observe an 11-by-11 grid, which contained four colored objects and a number of internal walls. A second A.I. was given the task of walking to a specific colored square. It also had to pass by another color on the way.

While the second A.I. tried to complete the task, the developers would move the initial target. They then asked ToM-Net what the second A.I. would do.

ToM-Net was able to correctly predict the actions of the second A.I., based on the information that was given to it.
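The shape of the experiment can be sketched in a few lines of Python, assuming a toy 11-by-11 grid, random object and wall positions, and a naive greedy walker; none of this is ToM-Net itself, only an illustration of the setup the observer watches.

```python
import random

SIZE = 11
COLOURS = ["red", "green", "blue", "yellow"]

def random_cell():
    return (random.randrange(SIZE), random.randrange(SIZE))

objects = {colour: random_cell() for colour in COLOURS}  # four coloured objects
walls = {random_cell() for _ in range(10)}               # internal walls (ignored below)

def greedy_step(pos, target):
    """Move one step toward the target; walls are ignored to keep the sketch short."""
    (x, y), (tx, ty) = pos, target
    if x != tx:
        return (x + (1 if tx > x else -1), y)
    if y != ty:
        return (x, y + (1 if ty > y else -1))
    return pos

agent = random_cell()
target = objects["blue"]        # the colour the second agent is told to reach
for _ in range(2 * SIZE):       # enough steps to cross the grid
    agent = greedy_step(agent, target)
    if agent == target:
        break
print("agent ended at", agent, "- target was", target)
```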

About The Author

If you would like to read more from Ronald van Loon on the possibilities of Artificial Intelligence, Big Data, and the Internet of Things (IoT), please click “Follow” and connect on LinkedIn, Twitter and YouTube.

Ronald

Ronald helps data-driven companies generate business value with best-of-breed solutions and a hands-on approach. He has been recognized as one of the top 10 global influencers by DataConomy for predictive analytics, and by Klout for Data Science, Big Data, Business Intelligence and Data Mining. He is a guest author on leading Big Data sites, a speaker, chairman and panel member at national and international webinars and events, and runs a successful series of webinars on Big Data and on Digital Transformation. He has been active in the data (process) management domain for more than 18 years, has founded multiple companies, and is now director at a data consultancy company that is a leader in Big Data and data process management solutions. He has a broad interest in big data, data science, predictive analytics, business intelligence, customer experience and data mining. Feel free to connect on Twitter or LinkedIn to stay up to date on success stories.


The post Google DeepMind: The Importance of Artificial Intelligence appeared first on Ronald van Loon.

 

April 16, 2018


Revolution Analytics

News from the R Consortium

The R Consortium has been quite busy lately, so I thought I'd take a moment to bring you up to speed on some recent news. (Disclosure: Microsoft is a member of the R Consortium, and I am a member of...

...
 

April 13, 2018


Revolution Analytics

Because it's Friday: The borders have changed. Film at 11.

There's a lot of stupidity in the US news these days, but at least these reviews of bad maps on TV are amusing rather than infuriating. (Click through to see the entire thread.) That's Iran in the...

...
 

April 12, 2018


Revolution Analytics

The case for R, for AI developers

I had a great time this week at the Qcon.ai conference in San Francisco, where I had the pleasure of presenting to an audience of mostly Java and Python developers. It's unfortunate that videos won't...

...
 

April 11, 2018


Datameer

Data Challenges for New Age Analytics in Retail

Having spent some time in the retail business, specifically apparel, and at a company that focused on helping e-retailers, I have an appreciation for the challenges these organizations face. With the...

...
 

April 10, 2018


Revolution Analytics

Statistics from R-bloggers

Tal Galili's R-bloggers.com has been syndicating blog posts about R for quite a while — from memory I'd say about 8 years, but I couldn't find the exact date it started aggregating. Anyway, it...

...
 

April 09, 2018


Revolution Analytics

In case you missed it: March 2018 roundup

In case you missed them, here are some articles from March of particular interest to R users. The reticulate package provides an interface between R and Python. BotRNot, a Shiny application that uses...

...
 

April 07, 2018


Jeff Jonas

Democratizing Entity Resolution.

In August of 2016, my team and I spun the G2 technology out of IBM. Into stealth mode we went, again. We are now back out of stealth mode and set to democratize Entity Resolution (yes, I am starting...

...
 

April 06, 2018


Revolution Analytics

Because it's Friday: Regex Games

I've been wrestling with regular expressions recently, so it was useful to give myself a bit of a refresher with Regex Crossword (with thanks to my colleague Paige for the tip). Little...

...

Revolution Analytics

A few podcast recommendations

After avoiding the entire medium for years, I've been rather getting into listening to podcasts lately. As a worker-from-home I don't have a commute (the natural use case of podcasts, I guess), but I...

...

Datameer

Four Ways to Overcome Data and Analytic Challenges in the Insurance Industry

The insurance industry, in particular the property and casualty, life and annuity, and re-insurance sectors, is fraught with very interesting data and analytics challenges. While there is vast...

...

Datameer

Datameer and IBM Cloud Private for Data

Many CEOs see Artificial Intelligence (AI) and Machine Learning (ML) as a key component to gaining competitive advance in their respective marketplaces. A 2017 survey of Fortune 500 CEO’s found that...

...

Datameer

Five Rules of Data Exploration

As always, one always learns something new at the Gartner Data and Analytics Summit (the 2018 North America version held last week in Grapevine, Texas). I attended a fascinating session with two of...

...
 

April 05, 2018

InData Labs

How to Design Better Machine Learning Systems with Machine Learning Canvas

Machine Learning Canvas is a template for designing and documenting machine learning systems. It has an advantage over a simple text document because the canvas addresses the key components of a machine learning system with simple blocks that are arranged based on their relevance to each other. This tool has become popular because it simplifies the visualization of a complex project and helps to start a structured conversation about it.

The post How to Design Better Machine Learning Systems with Machine Learning Canvas appeared first on InData Labs.

 

April 04, 2018


Revolution Analytics

Not Hotdog: An R image classification application, using the Custom Vision API

If you're a fan of the HBO show Silicon Valley, you probably remember the episode where Jian Yang creates an application to identify food using a smartphone phone camera: Surprisingly, the app in...

...