I will provide motivation for the technique, a python implementation of it and finally some benchmarks.
From a pragmatic perspective, EMDE is just an algorithm for compressing sets of vectors into a single fixed-width vector.
Aggregating a set of vectors into a single vector may seem like a pretty esoteric requirement but it is actually quite common. It often arises when you have one kind of entities (users) interacting with another (items, merchants, websites).
Let’s say you already have a vector representation (embedding) of every food item in the world. Like any good embedding, it captures the metric relations between the underlying objects - similar foods are represented by similar vectors and vice versa. You also have a list of people and the foods they like. You would like to leverage the food embeddings to create embeddings of the people - for the purpose of recommendation or classification or any other person-based ML task.
Somehow you want to turn the information that Alice likes salami, chorizo and Hawaiian pizza (mapped to vectors v1, v2, v3) into a single vector representing Alice. The procedure should work regardless of how many food items a person likes.
Another way of looking at the same problem - and one taken by the authors of the EMDE paper - is as a problem of estimation of a density function in the embedding space.
Instead of thinking of foods as distinct points in space, we can imagine a continuous shape - a manifold - in the embedding space whose every point corresponds to a food - real or potential. Some of the points on this manifold are familiar - an apple or a salami. But between the familiar ones there is a whole continuum of foods that could be. An apple-flavored salami? Apple and salami salad? If prosciutto e melone is a thing, why not salami apple?
In this model we can think of Alice’s preferences as a probability density function defined on the manifold. Maybe the function’s value is highest in the cured meats region of the manifold, lower around pizzas and zero near the fruits. That means Alice likes salami, chorizo, pepperoni and every other similar sausage we can invent but she only likes some pizzas and none of the fruits.
This density function is latent - we can’t measure it directly. All we know is the handful of foods that Alice explicitly liked. We can interpret these items as points drawn from the latent probability distribution. What we’re trying to do is use the sample to get an estimate of the pdf. The reason this estimation is at all possible is that we believe the function is well-behaved in the embedding space - it doesn’t vary too wildly between neighbouring items. If Alice likes salami and chorizo, she will also probably like other similar kinds of sausage like pepperoni.
Viewed from this perspective, the purpose of EMDE is to estimate this latent density function using a fixed number of parameters. The estimated parameters can then serve as a feature vector describing the user.
The most straightforward way of summarising a list of vectors is by taking their arithmetic average. That’s exactly what I have tried in my post from 2016. It worked okay-ish as a feature engineering technique but clearly a lot of detail gets lost this way. For instance, by looking at just the average vector, you can’t tell the difference between someone who likes hot dogs and someone else who only likes buns and frankfurters separately.
The average is just a summary statistic of a distribution - but what EMDE is trying to do is capture the full distribution itself (up to a finite but arbitrary precision).
The input to this algorithm consists of: an embedding of every item and, for each user, the set of items that user has interacted with.
And the hyperparameters: K - the number of random hyperplanes used to cut the space into buckets - and N - the number of independent sets of hyperplanes.
The output is a sparse embedding of each user.
The algorithm (the illustrations will use K=3 and N=4):
Start with the set of embeddings of all items.
Cut the space into regions (buckets) using random hyperplanes. The orientation of the hyperplanes is uniformly random and their position is drawn from the distribution of the item vectors. That means the planes always cut through the data and most often through regions where data is dense, never outside the range of data.
Assign numbers to the regions.
For each user count items in each bucket.
The sequence of numbers generated this way is the desired summary of the user’s items (almost). It is easy to see that these numbers define a coarse-grained density function over the space of items - like so:
To get a more fine-grained estimate of the latent density, we need to repeat steps 2. and 3. N times and concatenate the resulting count vectors per user.
This sequence of numbers (the authors of the paper call it a “sketch” as it is a kind of a Count Sketch) is the final output of EMDE (for one particular user).
The corresponding density function would look something like this:
Two important properties of this algorithm:

- Additivity: the sketch of a union of sets is the sum of the individual sketches - sketch({apple, salami}) = sketch({apple}) + sketch({salami})
- Similarity preservation: sets of similar items get similar sketches - sketch({apple, salami}) ~ sketch({pear, chorizo})
The authors of the EMDE paper suggest that sketch width = 128 (roughly corresponding to K=7) is a good default setting and one should spend one’s dimensionality budget on increasing N rather than K beyond this point.
But why bother with N sets of hyperplanes at all? Why not use all of them in one go (N=1, big K)?
The answer (I think) is that we don’t want the buckets to get too small. The entire point of EMDE is to have multiple similar items land in the same bucket - otherwise it’s just one-hot encoding. OHE is not a bad thing in itself but it’s not leveraging the embeddings anymore.
Having large buckets (small K), on the other hand, leads to false positives - dissimilar items landing in the same bucket - but we mitigate this problem by having N overlapping sets of buckets. Even if bananas and chorizo end up in the same bucket in one of the sets, they probably won’t in the others.
That being said, I have tried lots of different combinations of K and N and can’t see any clear pattern regarding what works best.
Once trained on a set of vectors, EMDE can be used to transform any other sets of vectors - as long as they have the same dimension. However, in most applications, all the item vectors are static and known up front. The following implementation will assume that this is the case which will let us make the code cleaner and more efficient. I have included the more general, less efficient implementation here.
Thanks to the additivity of sketches, to find the sketch of any given set of items it is enough to find the sketches of all the individual items and add them. Since all the items are known at training time, we can just pre-calculate sketches for all of them and simply add them at prediction time.
The following function pre-calculates sketches for all the items given their embeddings.
Linear algebra 101 reminder: a hyperplane is the set of points $\vec{x}$ in a Euclidean space that satisfy:

$$\vec{v} \cdot \vec{x} = c$$

for some constant vector $\vec{v}$ and scalar $c$.
If $\vec{v} \cdot \vec{x} > c$ - then $\vec{x}$ lies to one side of the hyperplane. If $\vec{v} \cdot \vec{x} < c$ - it lies on the other side.
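The original listing did not survive in this copy. Below is a minimal sketch of what such a function could look like, assuming scikit-learn’s CountVectorizer is used to collect the bucket counts; the function name, defaults and token format are my own:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer


def fit_item_sketches(item_embeddings, K=7, N=10, seed=0):
    """Pre-calculate a sparse sketch row for every item.

    item_embeddings: array of shape (n_items, dim)
    K: number of random hyperplanes per partition
    N: number of independent partitions
    """
    rng = np.random.RandomState(seed)
    n_items, dim = item_embeddings.shape

    tokens_per_item = [[] for _ in range(n_items)]
    for part in range(N):
        # hyperplane orientations are uniformly random...
        normals = rng.normal(size=(K, dim))
        # ...but their offsets are sampled from the data itself,
        # so every plane cuts through the cloud of item vectors
        anchors = item_embeddings[rng.randint(n_items, size=K)]
        offsets = np.sum(normals * anchors, axis=1)
        # which side of each of the K planes an item falls on -> K bits -> bucket id
        bits = (item_embeddings @ normals.T > offsets).astype(int)
        bucket_ids = bits @ (2 ** np.arange(K))
        for i, bucket in enumerate(bucket_ids):
            tokens_per_item[i].append(f"{part}_{bucket}")

    # one column per non-empty bucket; buckets no item fell into never get a column
    vectorizer = CountVectorizer(analyzer=lambda tokens: tokens)
    return vectorizer.fit_transform(tokens_per_item)
```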
Note that CountVectorizer above makes sure that only the buckets with at least one vector in them are represented. As a result, the width of a single sketch, which is at most $2^K$ ($2^K N$ for the full sketch), is in practice often much lower - especially for low-dimensional embeddings.
Now, for convenience, we can wrap this up in a sklearn-like interface while adding the option to use tfidf weighting for items.
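Again, the original class is not reproduced here; this is a bare-bones sketch of such a wrapper (the tf-idf weighting option mentioned above is left out for brevity, and the class and method names are my own):

```python
from sklearn.base import BaseEstimator, TransformerMixin


class EMDETransformer(BaseEstimator, TransformerMixin):
    """Fit on item embeddings, then transform lists of item ids into user sketches."""

    def __init__(self, item_embeddings, K=7, N=10, seed=0):
        self.item_embeddings = item_embeddings
        self.K = K
        self.N = N
        self.seed = seed

    def fit(self, X=None, y=None):
        # pre-calculate one sketch row per item (using the function defined above)
        self.item_sketches_ = fit_item_sketches(
            self.item_embeddings, K=self.K, N=self.N, seed=self.seed
        )
        return self

    def transform(self, X):
        # X: iterable of lists of item ids; by additivity, a user's sketch
        # is just the sum of the sketches of their items
        rows = [self.item_sketches_[item_ids].sum(axis=0) for item_ids in X]
        return np.asarray(np.vstack(rows))
```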
It can be used like this:
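For example, with made-up toy data:

```python
# 5 items embedded in 2d, and two users defined by the items they like
item_embeddings = np.random.normal(size=(5, 2))
users = [[0, 1, 2], [3, 4]]

emde = EMDETransformer(item_embeddings, K=3, N=4).fit()
user_sketches = emde.transform(users)
```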
The result is a matrix with one sketch row per user.
Before passing this sketch to an ML model, you might want to normalize it row-wise. The paper suggests ‘L2’ normalization. I’ve had even better results with the max norm:
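With scikit-learn this is a one-liner; 'l2' is what the paper suggests, 'max' is what worked best for me:

```python
from sklearn.preprocessing import normalize

user_sketches = normalize(user_sketches, norm='max', axis=1)
```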
To test the efficacy of the EMDE approach I used the good old text classification benchmarks - 20 Newsgroups and R8.
In both cases I trained Word2Vec on all the texts to get a word embedding and then used EMDE to generate a sketch for every document by aggregating the word embeddings. Then I trained and tested a logistic regression (with 5-fold cross validation) on the sketches as well as on the raw word counts and on averaged embeddings.
The results for R8:
And for 20 Newsgroups:
First of all - you’ll notice that all the EMDE dimensionalities tend to be lower for the R8 dataset than for 20 Newsgroups. That is because R8 is a much smaller dataset with fewer distinct words in it (23k vs 93k). Consequently you more often end up with an empty bucket - and those get dropped by CountVectorizer.
As for the actual results: K=30 N=1 is on top of one of the benchmarks, K=30 N=2000 wins in the other.

In conclusion - EMDE is simple, fast and efficient. It will make a great addition to the feature engineering arsenal of any data scientist.
All the code can be found here. With this you can quickly get started embedding your own graphs.
Before we get started, here’s a motivating example: visualisation of the Movies Dataset from Kaggle.
The above embedding was based on a multi-relation graph of people working on movies (actors, directors, screenwriters, lighting, cameras etc.). The visualisation is the result of running UMAP on the embeddings of the most popular movies (ignoring embeddings of people, which were a by-product).
And here’s the same set of movies but with a different embedding:
This embedding was based on the graph of movie ratings. The nodes correspond to movies and raters. There are 3 types of edges - ‘this user hated this movie’, ‘this user found this movie acceptable’, ‘this user loved this movie’ - corresponding to ratings 1 to 2.5, 3 to 3.5, 4 to 5 out of 5.
I encourage you to mouse over the graphs to reveal clusters of movies related by either overlapping cast and crew (first plot) or by overlapping fanbase (second plot). It’s quite fun.
Note that one could use either of these embeddings (or a combination of the two) as a basis for a movie recommender system.
Graph embeddings are a set of algorithms that given a graph (set of nodes connected by edges) produce a mapping node -> n-dimensional vector (for some specified n). The goal of embedding is for the metric relationships between vectors to reflect connections of the graph. If two nodes are connected, their embeddings should be close in vector space (under some metric), if they are not - the embeddings should be distant.
If successful, the embedding encodes much of the structure of the original graph but in a fixed-width, dense numeric format that can be directly used by most machine learning models.
Unlike their better known cousins - word embeddings - graph embeddings are still somewhat obscure and underutilised in the data science community. That must be in part because people don’t realise that graphs are everywhere.
Most obviously, when the entities you’re studying directly interact with each other - they form a graph. Think - people following each other on social media or bank customers sending each other money.
More common in real life applications are bipartite graphs. That’s when there are two kinds of entities - A and B - and As link with Bs but As don’t link with other As directly and neither do Bs with other Bs. Think - shoppers and items, movies and reviewers, companies and directors. Embedding these kinds of graphs is a popular technique in recommender systems - see for example Uber Eats.
Text corpora are graphs too! You can represent each document in a corpus and each word in a document by a node. Then you connect a document-node to a word-node if the document contains the word. That’s your graph. Embedding this graph yields a word embedding + document embedding for free. (you can also use a sliding window of a few words instead of full document for better results). This way you can get a good quality word embedding using graph embedding techniques (see e.g. this).
In short - graph embeddings are a powerful and universal feature engineering technique that turns many kinds of sparse, unstructured data into dense, structured data for use in downstream machine learning applications.
There are heaps of graph embedding algorithms to pick from. Here’s a list of models with (mostly Python) implementations. Unfortunately most of them are little better than some researcher’s one-off scripts. I think of them less as tools that you can pick up and use and more as a starting point to building your own graph embedder.
PyTorch BigGraph is by far the most mature of the libraries I have seen. It is CPU-based - which is unusual and seems like a wasted opportunity, but it does make using it easier and cheaper. And most importantly, it scales to very large graphs.
It even includes a distributed mode for parallelizing training on the cluster. Unless the nodes of your graph number in the billions though, IMHO it is easier to just spin up a bigger machine at your favourite cloud platform. In my experiments a 16 CPU instance is enough to embed a graph of 25m nodes, 30m edges in 100d in a few hours.
If you’re curious about
If PBG is so great why does it need a tutorial?
It seems to me that the authors were so focused on customizability that they let user experience take a back seat. Simply put - it takes way too many lines of code to do the simplest thing in PBG. The simplest usage example included in the repository consists of two files - one 108 and one 46 lines long. This is what it takes to do the equivalent of model.fit(data).predict(data).
I’m guessing this is the reason why the library hasn’t achieved wider adoption. And without a wide user base, who is there to demand a friendlier API?
I have wasted a lot of time before I managed to refactor the example to work on my graph. What follows is my stripped down to basics version of graph embedding that should work out of the box - the “Hello World!” - and one that you can use as a template for more complicated tasks.
I found another similar tutorial on Towards Data Science but the code didn’t work for me (newer version of PBG perhaps?).
The full code of the example, with comments, is here.
First thing to do is installing PBG. As of this writing, the version available on PyPi is broken (crashes on running the first example) and you have to install it directly from github:
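The exact command is missing from this copy; assuming the repository location hasn’t changed, it would be along the lines of:

```bash
pip install git+https://github.com/facebookresearch/PyTorch-BigGraph.git
```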
Full requirements are here.
The graph we will be embedding consists of 4 nodes - A, B, C, D - and 5 edges between them. It needs to be saved as a tab-separated file like so:
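The original file is not reproduced here - any 5 edges between the 4 nodes will do. A hypothetical example:

```
A	B
B	C
C	D
D	A
A	C
```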
Before we can apply PBG to the graph, we have to transform it into a PBG-friendly format (fortunately PBG provides a function for that). Before we do that, we have to define the training config. The config is a data structure holding all the settings and hyperparameters - like how many partitions to use (1, unless you want to do distributed training), what types of nodes there are (only 1 type here), what types of edges between them, etc.
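The original config is missing from this copy. A minimal config along these lines should convey the idea - the paths, the entity/relation names and several hyperparameter values are placeholders of my own, and the exact set of fields may differ between PBG versions:

```python
raw_config = dict(
    # I/O: where the converted graph and the checkpoints will live
    entity_path="data/example_1",
    edge_paths=["data/example_1/edges_partitioned"],
    checkpoint_path="model/example_1",
    # graph structure: a single entity type, a single relation type, one partition
    entities={"WHATEVER": {"num_partitions": 1}},
    relations=[
        {"name": "doesnt_matter", "lhs": "WHATEVER", "rhs": "WHATEVER",
         "operator": "complex_diagonal"}
    ],
    dynamic_relations=False,
    # embedding hyperparameters
    dimension=4,
    global_emb=False,
    comparator="dot",
    num_epochs=10,
    num_uniform_negs=50,
    loss_fn="softmax",
    lr=0.1,
    eval_fraction=0.0,
)
```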
Next, we use the config to transform the data into the preferred format using the convert_input_data helper function from torchbiggraph.converters.importers. Note that the config needs to be parsed first using another helper function, because nothing is simple with PyTorch BigGraph.
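Roughly like this - the argument order of convert_input_data follows the PBG version current at the time and should be treated as an approximation rather than gospel:

```python
from pathlib import Path

from torchbiggraph.config import parse_config
from torchbiggraph.converters.importers import TSVEdgelistReader, convert_input_data

config = parse_config(raw_config)

convert_input_data(
    config.entities,
    config.relations,
    config.entity_path,
    config.edge_paths,
    [Path("data/example_1/graph.tsv")],          # the raw tab-separated edge list
    TSVEdgelistReader(lhs_col=0, rhs_col=1, rel_col=None),
    dynamic_relations=config.dynamic_relations,
)
```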
Having prepared the data, training is straightforward:
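A single call, assuming the parsed config from the previous step:

```python
from torchbiggraph.train import train

train(config)
```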
Important note: the above code (both data preparation and training) can’t be at the top level of a module - it needs to be placed inside a if __name__ == '__main__':
block or some equivalent. This is because PBG spawns multiple processes that import this very module at the same time. If this code is at the top level of a module, multiple processes will be trying to create the same file simultaneously and you will have a bad time!
After training is done, we can load the embeddings from a h5 file. This file doesn’t include names of the nodes so we will have to look those up in one of the files created by the preprocessing function.
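Something like the snippet below - the exact file names (entity names json, embeddings h5, checkpoint version suffix) are my best guess at PBG’s output layout and may need adjusting:

```python
import json
import h5py

# the preprocessing step wrote out the node names, in order...
with open("data/example_1/entity_names_WHATEVER_0.json") as f:
    names = json.load(f)

# ...and training wrote the embeddings, in the same order, to an h5 file
with h5py.File("model/example_1/embeddings_WHATEVER_0.v10.h5", "r") as f:
    embeddings = f["embeddings"][:]

for name, emb in zip(names, embeddings):
    print(name, emb)
```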
The final result is a short embedding vector printed out for each of the four nodes.
This is it!
The second example will feature PBG’s big selling point - the support for multi-relation graphs. That means graphs with multiple kinds of edges. We will also throw in multiple entity types for good measure.
Imagine if Twitter and eBay had a baby. Data generated on this unholy abomination of a website might look something like this:
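The original sample file is missing here; hypothetical data consistent with the discussion below would look like this (tab-separated: source node, relation type, destination node):

```
alice	bought	item_1
alice	bought	item_2
alice	follows	carol
bob	follows	carol
bob	bought	item_3
carol	sold	item_1
carol	sold	item_3
dave	sold	item_2
dave	hates	carol
dave	hates	bob
```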
Here users follow other users as well as buy and sell items to each other. As a result we have two types of entities - users and items - and four types of edges - ‘bought’, ‘sold’, ‘follows’ and ‘hates’.
We want to jointly embed users and items in a way that implicitly encodes who is buying and selling what and following or hating whom.
We could do it by ignoring relation types and embedding it as a generic graph. That would be wrong because ‘follows’ and ‘hates’ mean something quite different and we don’t want to represent Bob and Dave as similar just because one of them follows Carol and the other hates her.
Or we could do it by separately embedding 4 graphs - one for each type of relation. But that’s not ideal either because we’re losing valuable information. In our silly example Alice would only appear in the graphs of “bought” and of “follows”. Dave only appears in graphs of “sold” and “hates”. Therefore the two users wouldn’t have a common embedding and it wouldn’t be possible to calculate distance between them. A classifier trained on Alice couldn’t be applied to Dave.
We can solve this problem by embedding the full multi-relation graph in one go in PBG.
Internally, PBG deals with different relation types by applying a different (learned) transformation to a node’s embedding in the context of each relation type. For example, it could learn that if A ‘follows’ B, they should be close in vector space, but when A ‘hates’ B, they should be close after flipping the sign of all coordinates of A - i.e. they should be represented by opposite vectors.
From the point of view of a PBG user, the only difference when embedding a multi-relation, multi-entity graph is that one has to declare all relation types and entity types in the config. We also get to choose a different transformation for each relation (though I can’t imagine why anyone would). The config dict for our Twitter/eBay graph would look like this:
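Again, a sketch with placeholder paths and hyperparameters rather than the original - the important part is the two entity types and the four relation types:

```python
raw_config = dict(
    entity_path="data/example_2",
    edge_paths=["data/example_2/edges_partitioned"],
    checkpoint_path="model/example_2",
    entities={
        "user": {"num_partitions": 1},
        "item": {"num_partitions": 1},
    },
    relations=[
        {"name": "bought",  "lhs": "user", "rhs": "item", "operator": "complex_diagonal"},
        {"name": "sold",    "lhs": "user", "rhs": "item", "operator": "complex_diagonal"},
        {"name": "follows", "lhs": "user", "rhs": "user", "operator": "complex_diagonal"},
        {"name": "hates",   "lhs": "user", "rhs": "user", "operator": "complex_diagonal"},
    ],
    dynamic_relations=False,
    dimension=4,
    global_emb=False,
    comparator="dot",
    num_epochs=10,
    num_uniform_negs=50,
    loss_fn="softmax",
    lr=0.1,
    eval_fraction=0.0,
)
```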
Once embedding is trained, the embeddings can be loaded the same way as with a generic graph, the only difference being that each entity type has a separate embedding file.
Full code is here.
Happy embedding!
The typical data science project doesn’t make any sense whatsoever and should never have been attempted.
Data science has a huge solution-looking-for-a-problem situation going on. Enterprise managers trying to appear data-driven, startup founders wanting to impress investors with cool buzzwords and proprietary IP, young data scientists themselves itching to try the newest technique from a paper - there are a lot of people looking for an excuse to do ML/AI/DL. When they finally find it, they (or rather - we) don’t try too hard to see if it makes business sense. As a result, the majority of data science projects never move beyond the stage of slides and jupyter notebooks.
Here is my subjective, non-exhaustive list of types of nonsense data science:
By far the most common failure mode for a data science project is to never be productionised because of lack of infrastructure or lack of interest on the business side. These projects were only attempted because someone thought they sounded cool, in the complete absence of a realistic business case. This could have been avoided by asking a simple question before starting the project:
‘And then what?’
So you apply your DBSCAN on top of your vectors from Word2Vec to assign your customers to clusters - and then what?
Or you run sentiment analysis on all the comments on your website - and then what?
Or you train a GAN on all the images in your database - and then what?
‘How do we productionise the result? Do we have the infrastructure for it? What will the benefit be if we manage to do it?’
If the only answer is ‘and then we prepare slides to show to stakeholders’ - I suggest that we skip the ‘train the neural network’ bit and prepare the slides already. In the unlikely event that the stakeholders have a real use case for the classifier, we can start working on the use case immediately. Otherwise we move on to the next task having saved ourselves weeks, maybe months of unnecessary work.
Another, less blatant way for a data science project to not make sense is for it to be sort of useful but completely not worth the effort. Like training a bespoke deep learning model to analyse 20 pages of text. Or an image quality assessment tool that saves a real estate agent 5 seconds per 1h house visit.
The question I ask stakeholders (sometimes that means asking myself) to address this problem is:
‘How much is the solution to this problem worth to you? If it’s so valuable, why haven’t you paid people to do it manually before?’
The set of good answers to this question includes:
we have been doing it manually, automating it would save us £X/year
and
we could do it manually but being able to do it in real time would be a game-changer, worth £X.
A special subcategory of ‘obviously not worth it’ projects contains ones where a solution already exists in a commoditised form on AWS, GCP, Azure etc. Examples include OCR, speech to text, generic text and image classification, object detection, named entity recognition and more.
Trying to build (for instance) a better or cheaper OCR than the one Google is selling is first of all hopeless, but more importantly a distraction from your actual business (unless your business is selling OCR, in which case good luck!).
I sometimes hear data scientists complaining that it’s no fun calling APIs for everything and they would rather build ML models themselves. I disagree. For one, I find solving an already solved problem depressing. Secondly, outsourcing the most generic ML tasks frees up your time to do higher-level tasks and tasks specific to your business. If you really have nothing to do in your company except for reinventing the wheel then you’re in the wrong company.
The flipside of Busywork Data Science is Wishful Thinking Data Science. Attacking problems that it would be fantastic to have solved but which are obviously not solvable with the given data.
I most often see this kind of thing with predicting the future (which is the hardest period to predict).
Wouldn’t it be great to know the house price index/traffic on the website/demand for a product a year in advance? Can you fit your neural network/hidden markov model to the chart with historical data to make a forecast?
I can fit anything to anything but that won’t tell you much a hand-drawn trend line wouldn’t reveal. Next year’s house prices depend on a million different external political, economic and demographic factors that are either unpredictable or not predictable from price data alone. How the Prime Minister is going to handle Brexit is simply not something that can be divined from squiggly line of past house prices.
Sometimes projects like these are pitched by naive managers and CEOs who think AI is a magic dust you can sprinkle over a problem and make the impossible possible. More often it involves people who either know the prediction won’t work or don’t care enough to find out, their only concern being whether the technology will impress the customer.
This is when the client has a vaguely data-sciencey task but adamantly refuses to specify the objective or acceptance criteria.
- We need you to calculate a score for every company.
- Ok. What do you want this score to measure or predict?
- Dunno. Like, how good they are?
- Good in what way? Good to work at? Good to invest in? A credit rating maybe?
- No, nothing mundane like that.
- Then what?
- You’re the data scientist, we were hoping you would tell us.
- …
- Be sure to include Twitter data!
It’s a normal part of a data scientist’s job to act as a psychoanalyst helping the client discover and articulate what they actually want. But sometimes there is just nothing there to discover because the whole project is just an empty marketing gimmick or an exercise in bureaucratic box-checking.
In 1985 sci-fi comedy movie Weird Science a pair of teenagers make a simulation of a perfect woman on their home computer. After they hook the computer to a plastic doll and hack into a government system, a power surge causes the magical dream woman to come to life.
Today even small children and the elderly are familiar enough with computers to know they don’t work like that. But replace the government system with the cloud, throw in some deep learning references and you’ve got yourself a plausible 2019 movie premise.
Bullshit data science happens because decision makers have the same level of understanding of, and attitude towards, data science that 1980s audiences had towards computers. They have unrealistic expectations, are easily bamboozled by it, don’t know how to use it and don’t trust it enough to use it where it would make a real difference.
This will eventually change the same way it did with computers in general. The current generation of data scientists will start graduating into management roles, founding their own startups, eventually retiring - same as happened with the programmers from the 1980s.
Until then, we are going to have to fight the bullshit however we can. For data scientists themselves that entails paying more attention to the ‘why’ of what they’re doing, not just the ‘how’. And for the clients the first step would be to involve an experienced and business savvy data scientist from the get go, to help shape what needs to be done instead of just carrying out (potentially nonsensical) orders.
For the purposes of this post I define a data analyst as someone who uses tools like Excel and SQL to interrogate data to produce reports, plots, recommendations but crucially doesn’t deliver code. If you work in online retail and create an algorithm recommending tiaras for pets - I call you a data scientist. If you query a database and discover that chihuahua owners prefer pink tiaras and share this finding with the advertising team - you are a data analyst.
Let me get one thing out of the way first: this post is not bashing analysts. Of course data analyst’s work is useful and rewarding in its own right. And there is more demand for it (under various names) than there is for data science. But that is beside the point. The point is that a lot of people will tell you that taking a job as a data analyst is a good way to prepare for data science and that is a lie. In terms of transferable skills you may as well be working as a dentist.
A data analyst is not a larval stage of a data scientist. They are completely different species.
| Data Analyst | Data Scientist |
|---|---|
| Sits with the business | Sits with engineers (but talks to the business) |
| Produces reports, presentations | Produces software |
Interestingly, the part about sitting in a different place (often a different floor or a different building!) is the bigger obstacle to moving into data science. Independent of having or not having the right skills, a data analyst can’t just up and start doing data science because they don’t have the physical means to do it! They don’t have:
While those things can eventually be gotten hold of with enough perseverance, there are other deficits that are even harder to make up for:
This should be obvious to anyone who has ever worked in a big company. You don’t simply walk into a software team and start making changes. It sometimes takes months of training for a new developer on the team to make their first real contribution. For an outsider from a different business unit to do it remotely is unheard of.
As a data analyst:
You will on the other hand do:
So that doesn’t sound all bad, right? Wrong.
I think a case can be made that the little technical work a data analysts do actually does more harm than good to their data science education. A data scientist and an analyst may be using some of the same tools, but what they do with them is very, very different.
| Data analyst’s code | Data scientist’s code |
|---|---|
| Manually operated sequence of scripts, clicking through GUIs etc. | Fully automated pipelines |
| Code that only you will ever see | Code that will be used and maintained by other people |
| One-off, throwaway scripts | Code that is part of a live app or a scheduled pipeline |
| Code tweaked until it runs this one time | Code optimised for performance, maintainability and reusability |
Doing things a certain way may make sense from a data analyst’s perspective, but the needs of data science are different. When former analysts are thrown into data science projects and start applying the patterns they have developed through the years, the results are not pretty.
Let me illustrate with an example which I promise is not cherry-picked and a fairly typical in my experience.
I joined a project led by analysts-turned-data-scientists. We were building a prototype of a pipeline doing some machine learning on the client’s data and displaying pretty plots. One of my first questions when I joined was: how are you getting your data from the client? (we needed a new batch of data at that time). The answer was:
Needless to say, this workflow made it impossible to get anything done whatsoever. To even run the pipeline again on fresher data would take weeks (when it should be seconds) and the results were junk anyway because the technologies they used forced them to only use 1% of available data.
On top of that, every single script in the pipeline was extremely hacky and brittle - and here’s why:
When faced with a task, an analyst would start writing code. If it doesn’t work at first, they add to it and tweak it until it does. As soon as a result is produced (a csv file usually), they move on to the next step. No effort is made to ensure reproducibility, reusability, maintainability, scalability. The piece of code gets you from A to B and that is that. A script made this way is full of hard-coded database passwords, paths to local directories, magic constants and untested assumptions about the input data. It resembles a late-game Jenga tower - weird and misshapen, with many blocks missing and others sticking out in weird directions. It is standing for now but you know that it will come crashing down if you as much as touch it.
The tragic part is that none of the people involved in this mess were dumb. No, they were smart and experienced, just not the right kind of experienced. This spaghetti of manual steps, hacky scripts and big data on old laptops is not the result of not enough cleverness. Way too much cleverness if anything. It’s the result of intelligent people with no experience in making software realising too late that they’re out of their depth.
If only my colleagues were completely non-technical - never having written a SAS or SQL script in their lives - they would have had to hire an engineer to do the coding and they themselves would have focused on preparing the spec. This kind of arrangement is not ideal but I guarantee that the result would have been much better. This is why I believe that the data analyst’s experience is not just useless but actively harmful to data science.
Ultimately though the fault doesn’t lie with the analysts but with the management for mismatching people and tasks. It’s time managers understood that:
In case I wasn’t clear about this: I am emphatically not saying that analysts can’t learn proper software engineering and data science. If miners can do it, so can analysts. It’s just that an analyst’s experience makes it harder for them (and their managers!) to realise that they are missing something and easier to get by without learning a thing.
If you’re an analyst and want to switch to data science (And I’m not saying that you should! The world needs analysts too!) I recommend that you forget everything you have learned about coding and start over, like the miners.
If you’re a grad considering a data analyst role as training for data science I strongly recommend that you find a junior software developer job instead. If you’re lucky, you may get to do some machine learning and graduate into full-on data science. But even if not, practically everything you learn in an entry-level engineering position will make you a better data scientist when you finally become one.
A popular meme places data science at the intersection of hacking, statistics and domain knowledge. It isn’t exactly untrue but it may give an aspiring data scientist the mistaken impression that those three areas are equally important. They’re not.
I’m leaving domain knowledge out of this discussion because, while it’s absolutely necessary to have it to get anything done at all, it usually doesn’t have to be very deep and you’re almost always expected to pick it up on the job.
First of all, hacking is something that we do every day while we can go months or years without touching any statistics. Of course, statistics and probability are baked into much of the software we use but we no more need to think about them daily than a pilot needs to think about the equations of aerodynamics.
Secondly, on those rare occasions when you do come up with some brilliant probabilistic model or business insight, it will still have to be implemented as a piece of software before it creates any value. And make no mistake - it will be implemented by you or not at all. A theoretical data scientist who dictates equations to engineers for implementation is not - and will never be - a thing.
Data science is a subset of software engineering. You design and implement software. It’s a peculiar kind of software and the design process is unusual but ultimately this is what you do. It is imperative that you get good at it.
Your colleagues will cut you a lot of slack with respect to programming on account of you bringing other skillsets to the table. As a result it is entirely possible for someone to be doing data science for years without picking up good engineering practices and modern technologies. Don’t let this happen to you.
The purely technological part of data science - installing things, getting things in and out of databases, version control, db and cluster administration etc. - may seem like a boring chore to you (I know it did to me) - best left to vanilla engineers who are into this stuff. This type of thinking is a mistake. Becoming better at engineering will:
That doesn’t mean that you have to be an expert coder to start working as a data scientist. You don’t even have to be an expert coder to start working as a coder. But you do need to have the basics and be willing to learn.
A trained engineer with no knowledge of statistics is one online course away from being able to perform a majority of data science jobs. A trained statistician with no tech skills won’t be able to do any data science at all. They may still be a useful person to have around (as a data analyst maybe) but would be completely unable to do any data science on their own.
Why do we even have data scientists then? Why aren’t vanilla engineers taking all the data science jobs?
Data science may not require much in terms of hard maths/stats knowledge but it does require that you’re interested in data and models. And most engineers simply aren’t. The good ones are too busy and too successful as it is to put any serious effort into learning something else. And the mediocre simply lack the kind of curiosity that makes someone excited about reinforcement learning or tweaking a shoe recommender.
Moreover, there is a breed of superstar software engineers doing drive-by data science. I know a few engineers each of whom can run circles around your average data scientist. They can read all the latest papers on a given AI/ML topic, then implement, test and productionise a state of the art recommender/classifier/whatever - all without breaking a sweat - and then move on to non-data related projects where they can make more impact. One well known example of such a person is Erik Bernhardsson - the author of Annoy and Luigi.
These people don’t call themselves ‘data scientists’ because they don’t have to - they already work wherever they want, on whatever projects they want, making lots of money - they don’t need the pretense. No, ‘data scientist’ is a term invented so all the failed scientists - the bored particle physicists and disenchanted neurobiologists - can make themselves look useful to employers.
There is no denying that
“I’m a data scientist with a strong academic background”
Does sound more employable than
“I have wasted 10 best years of my life on theoretical physics but I also took a Python course online, can I have jobs now plz”
I’m being facetious here but of course I do think smart science grads can be productive data scientists. And they will become immensely more productive if they make sure to steer away from ‘academic with a python course’ and towards ‘software professional who can also do advanced maths’.
Last year I wrote a post about using word embeddings like word2vec or GloVe for text classification. The embeddings in my benchmarks were used in a very crude way - by averaging word vectors for all words in a document and then plugging the result into a Random Forest. Unfortunately, the resulting classifier turned out to be strictly inferior to a good old SVM except in some special circumstances (very few training examples but lots of unlabeled data).
There are of course better ways of utilising word embeddings than averaging the vectors and last month I finally got around to trying them. As far as I can tell from a brief survey of arxiv, most state of the art text classifiers use embeddings as inputs to a neural network. But what kind of neural network works best? LSTM? BLSTM? CNN? BLSTM with CNN? There are dozens of tutorials on the internet showing how to implement this or that neural classifier and testing it on some dataset. The problem with them is that they usually give metrics without a context. Someone says that they achieved 0.85 accuracy on some dataset. Is that good? Should I be impressed? Is it better than Naive Bayes, SVM? Than other neural architectures? Was it a fluke? Does it work as well on other datasets?
To answer those questions, I implemented several network architectures in Keras and created a benchmark where those algorithms compete with classics like SVM and Naive Bayes. Here it is.
I intend to keep adding new algorithms and dataset to the benchmark as I learn about them. I will update this post when that happens.
All the models in the repository are wrapped in scikit-learn compatible classes with .fit(X, y), .predict(X), .get_params(recursive) and with all the layer sizes, dropout rates, n-gram ranges etc. parametrised. The snippets below are simplified for clarity.
Since this was supposed to be a benchmark of classifiers, not of preprocessing methods, all datasets come already tokenised and the classifier is given a list of token ids, not a string.
Naive Bayes comes in two varieties - Bernoulli and Multinomial. We can also use tf-idf weighting or simple counts, and we can include n-grams. Since sklearn’s vectorizer expects a string and we will be giving it a list of integer token ids, we have to override the default preprocessor and tokenizer.
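The original snippet is lost; a sketch of the idea, with hyperparameters chosen arbitrarily:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# documents arrive as lists of integer token ids, so we bypass the vectorizer's
# own preprocessing and tokenisation and just stringify the ids
vectorizer = TfidfVectorizer(
    preprocessor=lambda doc: doc,
    tokenizer=lambda doc: [str(token) for token in doc],
    ngram_range=(1, 2),
)
nb_model = make_pipeline(vectorizer, MultinomialNB())
```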
SVMs are a strong baseline for any text classification task. We can reuse the same vectorizer for this one.
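For instance (a linear SVM stands in here for whatever kernel the original used):

```python
from sklearn.svm import LinearSVC

svm_model = make_pipeline(vectorizer, LinearSVC())
```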
In other words - a vanilla feed forward neural network. This model doesn’t use word embeddings; the input to the model is a bag of words.
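A minimal Keras version of such a model might look like this (layer sizes and dropout are illustrative, not the tuned values from the benchmark):

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout

def build_mlp(input_dim, n_classes, hidden_dim=256, dropout=0.5):
    model = Sequential()
    model.add(Dense(hidden_dim, activation='relu', input_dim=input_dim))
    model.add(Dropout(dropout))
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model
```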
Inputs to this model need to be one-hot encoded, same goes for labels.
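Roughly like this (the helper name and the toy data are my own):

```python
import numpy as np
from keras.utils import to_categorical

def multi_hot(docs, vocab_size):
    # bag-of-words input: 1 wherever a token id occurs in the document
    X = np.zeros((len(docs), vocab_size))
    for i, doc in enumerate(docs):
        X[i, doc] = 1
    return X

y_train = np.array([0, 2, 1, 1])            # integer class labels
y_train_one_hot = to_categorical(y_train)
```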
This is where things start to get interesting. The input to this model is not a bag of words but a sequence of word ids. The first thing to do is construct an embedding layer that will translate this sequence into a matrix of d-dimensional vectors.
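Something along these lines, assuming a pre-trained embedding matrix with one row per vocabulary word:

```python
from keras.layers import Embedding

def build_embedding_layer(embedding_matrix, max_seq_len, trainable=False):
    vocab_size, embedding_dim = embedding_matrix.shape
    return Embedding(
        input_dim=vocab_size,
        output_dim=embedding_dim,
        weights=[embedding_matrix],    # initialise with word2vec/GloVe vectors
        input_length=max_seq_len,
        trainable=trainable,           # keep the pre-trained vectors frozen
    )
```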
Now for the model proper:
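The original architecture isn’t recoverable from this copy; as a plausible stand-in, here is a simple LSTM classifier sitting on top of the embedding layer defined above:

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

def build_lstm(embedding_layer, n_classes, units=128, dropout=0.5):
    model = Sequential()
    model.add(embedding_layer)
    model.add(LSTM(units))
    model.add(Dropout(dropout))
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model
```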
This and all the other models using embeddings require that labels are one-hot encoded and word id sequences are padded to a fixed length with zeros:
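With the standard Keras utilities (toy data shown for illustration):

```python
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

docs = [[5, 8, 2], [7, 1]]                 # token id sequences of varying length
labels = [0, 3]                            # integer class labels
X = pad_sequences(docs, maxlen=1000)       # zero-padded to a fixed length
y = to_categorical(labels)
```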
This is the (slightly modified) architecture from the Keras tutorial. It’s specifically designed for texts of length 1000, so I only used it for document classification, not for sentence classification.
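A reconstruction from memory of that tutorial architecture - stacked Conv1D/MaxPooling1D blocks over 1000-token documents - rather than the exact code used in the benchmark:

```python
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, GlobalMaxPooling1D, Dense

def build_doc_cnn(embedding_layer, n_classes):
    model = Sequential()
    model.add(embedding_layer)                   # expects sequences of length 1000
    model.add(Conv1D(128, 5, activation='relu'))
    model.add(MaxPooling1D(5))
    model.add(Conv1D(128, 5, activation='relu'))
    model.add(MaxPooling1D(5))
    model.add(Conv1D(128, 5, activation='relu'))
    model.add(GlobalMaxPooling1D())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model
```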
This is the architecture from Yoon Kim’s paper; my implementation is based on Alexander Rakhlin’s. This one doesn’t rely on the text being exactly 1000 words long and is better suited for sentences.
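The gist of that architecture - parallel convolutions with several filter widths, each followed by max-over-time pooling - sketched with the Keras functional API (filter sizes and counts follow the paper’s defaults, the rest is illustrative):

```python
from keras.models import Model
from keras.layers import (Input, Conv1D, GlobalMaxPooling1D, Concatenate,
                          Dropout, Dense)

def build_kim_cnn(embedding_layer, max_seq_len, n_classes,
                  filter_sizes=(3, 4, 5), num_filters=100, dropout=0.5):
    inputs = Input(shape=(max_seq_len,))
    embedded = embedding_layer(inputs)
    # one convolution per filter width, each followed by max-over-time pooling
    pooled = [GlobalMaxPooling1D()(Conv1D(num_filters, size, activation='relu')(embedded))
              for size in filter_sizes]
    merged = Dropout(dropout)(Concatenate()(pooled))
    outputs = Dense(n_classes, activation='softmax')(merged)
    model = Model(inputs, outputs)
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model
```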
Authors of the paper claim that combining BLSTM with CNN gives even better results than using either of them alone. Weirdly, unlike the previous 2 models, this one uses 2D convolutions. This means that the receptive fields of neurons run not just across neighbouring words in the text but also across neighbouring coordinates in the embedding vector. This is suspicious because there is no relation between consecutive coordinates in e.g. the GloVe embedding which they use. If one neuron learns a pattern involving coordinates 5 and 6, there is no reason to think that the same pattern will generalise to coordinates 22 and 23 - which makes convolution pointless. But what do I know.
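A sketch of the idea - a bidirectional LSTM whose output sequence is treated as a 2D “image” for the convolution - with made-up layer sizes:

```python
from keras.models import Sequential
from keras.layers import (Bidirectional, LSTM, Reshape, Conv2D, MaxPooling2D,
                          Flatten, Dense)

def build_blstm_2dcnn(embedding_layer, max_seq_len, n_classes,
                      lstm_units=150, num_filters=32):
    model = Sequential()
    model.add(embedding_layer)
    # the BLSTM returns one 2*lstm_units-dimensional vector per word...
    model.add(Bidirectional(LSTM(lstm_units, return_sequences=True)))
    # ...and that sequence is reshaped into a single-channel 2D input for Conv2D
    model.add(Reshape((max_seq_len, 2 * lstm_units, 1)))
    model.add(Conv2D(num_filters, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model
```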
In addition to all those base models, I implemented a stacking classifier to combine the predictions of all these very different models. I used 2 versions of stacking: one where the base models return probabilities and those are combined by a simple logistic regression, and another where the base models return labels and XGBoost is used to combine those.
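A compressed sketch of the probability-based variant (out-of-fold predictions feeding a logistic regression), not the exact benchmark code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def stack_probabilities(base_models, X_train, y_train, X_test):
    train_meta, test_meta = [], []
    for model in base_models:
        # out-of-fold probabilities, so the meta-model never sees leaked labels
        oof = cross_val_predict(model, X_train, y_train, cv=5, method='predict_proba')
        train_meta.append(oof)
        test_meta.append(model.fit(X_train, y_train).predict_proba(X_test))
    meta_model = LogisticRegression().fit(np.hstack(train_meta), y_train)
    return meta_model.predict(np.hstack(test_meta))
```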
For the document classification benchmark I used all the datasets from here. This includes the 20 Newsgroups, Reuters-21578 and WebKB datasets in all their different versions (stemmed, lemmatised, etc.).
For the sentence classification benchmark I used the movie review polarity dataset and the Stanford sentiment treebank dataset.
Some models were only included in document classification or only in sentence classification - because they either performed terribly on the other or took too long to train. Hyperparameters of the neural models were (somewhat) tuned on one of the datasets before including them in the benchmark. The ratio of training to test examples was 0.7 : 0.3. This split was done 10 times on every dataset and each model was tested 10 times. The tables below show average accuracies across the 10 splits.
Without further ado:
Well, this was underwhelming.
None of the fancy neural networks with embeddings managed to beat Naive Bayes and SVM, at least not consistently. A simple feed forward neural network with a single layer did better than any other architecture.
I blame my hyperparameters. Didn’t tune them enough. In particular, the number of epochs to train. It was determined once for each model, but different datasets and different splits probably require different settings.
And yet, the neural models are clearly doing something right because adding them to the ensemble and stacking significantly improves accuracy.
When I find out what exactly is the secret sauce that makes the neural models achieve the state of the art accuracies that papers claim they do, I will update my implementations and this post.
“Data science is statistics on a Mac”
Then there is the famous Venn diagram with data science at the intersection of statistics, hacking and substantive expertise.
What the hell?
Based on all those memes one would think that data scientists spend equal amounts of time writing code and writing integrals on whiteboards. Thinking about the right data structure and thinking about the right statistical test. Debugging pipelines and debugging equations.
And yet, I can’t remember a single time when I got to solve an integral on the job (and believe me, it’s not for lack of trying). I spent a total of maybe a week in the last 3 years explicitly thinking about statistical tests. Sure, means and medians and variances come up on a daily basis but it would be setting the bar extremely low to call that ‘doing statistics’.
Someone is bound to comment that I’m doing data science wrong or that I’m not a true data scientist. Maybe. But if true data scientist is someone who does statistics more than 10% of the time, then I’m yet to meet one.
But maybe this is the wrong way to think about it. Maybe my problem is that I was expecting mathematical statistics where I should have been expecting real world statistics.
Mathematical statistics is a branch of mathematics. Data scientists like to pretend they do it, but they don’t.
Real world statistics is an applied science. It’s what people actually do to make sense of datasets. It requires a good intuitive understanding of the basics of mathematical statistics, a lot of common sense and only infrequently any advanced knowledge of mathematical statistics. Data scientists genuinely do it, a lot of the time.
In my defense, it was an easy mistake to make. Mathematical statistics is what is taught in most courses and textbooks. If any statistics questions come up in a job interview for a data science role - it will be the mathematical variety.
To illustrate what I mean by ‘real world statistics’, to show that this discipline is not trivial and is interesting in its own right, I prepared a short quiz. There are 5 questions. None of them require any complicated math or any calculations. They do require a good mathematical intuition though.
I encourage you to try to solve all of them yourself before checking the answers. It’s easy to convince yourself that a problem is trivial after you’ve read the solution! If you get stuck though, every question has a hint below.
According to CDC data, US counties with the lowest incidence of kidney cancer happen to all be rural, sparsely populated and located in traditionally Republican states. Can you explain this fact? What does it tell us about the causes of cancer?
According to a series of interviews conducted with people who have been in a bar fight, 9 out of 10 times, when someone dies in a bar fight, he was the one who started it. How can you explain this remarkable finding?
After Google measured on-the-job performance of their programmers, they found a negative correlation between being a former winner of a programming competition and being successful on the job. That’s right - champion coders did worse on average. That raises important questions for recruiters. Do programming competitions make you worse at 9-5 programming? Should employers start screening out champion coders?
It is well documented that students from underprivileged backgrounds underperform academically at all levels of education. Two students enter a university - one of them comes from an underprivileged group, the other from a privileged one. They both scored exactly the same on the university admission exam. Should you expect the underprivileged student to do better, the same or worse in the next university exam compared to the other student? Bear in mind that although their numerical scores on the admissions test were the same, the underprivileged student heavily outperformed expectations based on his/her background, while the other student did as well as expected from his/her background.
According to studies the average number of sex partners Britons have had in their lives is 9.3 for men and 4.7 for women. How can those numbers possibly be different? After all, each time a man and a woman become sex partners, they increase the total sex partners tally for both sexes by +1. Find at least 5 different factors that could (at least in theory) account for the difference and rate them for plausibility.
It tells us nothing about causes of cancer, it’s a purely statistical effect and it has to be this way. Sparsely populated counties have less people in them, so sampling error is higher. That’s it. Think about an extreme case - a county with a population of 1. If the only inhabitant of this county gets kidney cancer, the county will have 100% kidney cancer rate! If this person remains healthy instead, the county will have cancer incidence rate of 0%. It’s easier for a small group of people to have extremely high or extremely low rate of anything just by chance. Needless to say, republicanism has nothing to do with cancer (as far as we know) - it’s just that rural areas are both sparsely populated and tend to lean Republican.
This example comes from Daniel Kahneman’s awesome book Thinking Fast And Slow. This blog post has a really nice visualisation of the actual CDC data that illustrates this effect.
People lie. Of course the dead one will be blamed for everything!
This one is slightly more subtle. It is not inconceivable that being a Programming Competition Winner (PCW) makes one less likely to be a Good Engineer (GE). But this is not the only and IMO not the most plausible explanation of the data. It could very well be that in the general population there is no correlation between GE and PCW or a positive correlation and the observed negative correlation is purely due to Google’s hiring practices. Imagine a fictional hiring policy where Google only hires people who either won a competition (PCW) or are proven superstar engineers (GE) - based on their open source record. In that scenario any engineer working at Google who was not a PCW would automatically be GE - hence a negative correlation between GE and PCW among googlers. The correlation in the subpopulation of googlers may very well be the opposite of the correlation in the entire population. Treating PCW as a negative in hiring programmers would be premature.
Erik Bernhardsson has a post with a nice visual illustration of this phenomenon (which is an instance of Berkson’s Paradox). The same principle also explains why all the handsome men you date turn out to be such jerks.
The underprivileged student should be expected to do worse. The reason is simple - the admissions test is a measure of ability but it’s not a perfect measure. Sometimes students score lower or higher than their average just by chance. When a student scores higher/lower than expected (based on demographics and whatever other information you have) it is likely that the student was lucky/unlucky in this particular test. The best estimate of the student’s ability lies somewhere between the actual test score and our prior estimate (which here is based on the demographics).
To convince yourself that it must be so, consider an example from sports. If a third league football team like Luton Town plays a top club like Real Madrid and ties, you don’t conclude that Luton Town is as good as Real Madrid. You raise your opinion of Luton Town and lower your opinion of Real Madrid but not all the way to the same level. You still expect Real Madrid to win the rematch.
This effect is an example of regression to the mean and it is known as Kelley’s Paradox. This paper illustrates it with figures with actual data from SAT and MCAT exams. You will see that the effect is not subtle!
Average number of sex partners for males ($MSP$) is the sum of the numbers of sex partners over all males divided by the number of all males:

$$MSP = \frac{\sum_{m \in \text{males}} \text{partners}(m)}{\text{number of males}}$$

and similarly for females:

$$FSP = \frac{\sum_{f \in \text{females}} \text{partners}(f)}{\text{number of females}}$$
The reason we think $MSP$ and $FSP$ should be equal is that every time a man and a woman become sex partners, the numerators of both $MSP$ and $FSP$ increase by $+1$. And the denominators are approximately equal too. Let’s list all the ways this tautology breaks down in real life:
And there are other factors as well, although it’s not clear to me which way they would tend to bias the ratio:
My good friend Nadbor told me that he found on Reddit someone asking if data scientists end up doing boring tasks such as classifying shoes. As someone who has faced this problem in the past, I was committed to showing that classifying shoes is a challenging, entertaining task. Maybe the person who wrote that would find it more interesting if the objects to classify were space rockets, but whether rockets or shoes, the problem is of the same nature.
Imagine that you work at a fashion aggregator, and every day you receive hundreds of shoes in the daily feed. The retailers send you one identifier and multiple images (with different points of view) per shoe model. Sometimes, they send you additional information indicating whether one of the images is the default image to be displayed at the site, normally, the side-view of the shoe. However, this is not always the case. Of course, you want your website to look perfect, and you want to consistently show the same shoe perspective across the entire site. Therefore, here is the task: how do we find the side view of the shoes as they come through the feed?
Before I jump into the technical aspect of the solution, let me just add a few lines on team-work. Through the years in both real science and data science, I have learned that cool things don’t happen in isolation. The solution that I am describing here was part of a team effort and the process was very entertaining and rewarding.
Let’s go into the details.
The solution implemented comprised two steps:
1-. Using the shape context algorithm to parameterise shoe-shapes
2-. Cluster the shapes and find those clusters that are comprised mostly by side-view shoes
Details on the algorithm can be found here and additional information on our python implementation is here. The steps required are mainly two:
1-. Find points along the silhouette of the shoe useful to define the shape.
2-. Compute a Shape Context Matrix using radial and angular metrics that will effectively parameterise the shape of the shoe.
Finding the relevant points to be used later to compute the Shape Context Matrix is relatively easy. If the background of the image is white, simply “slice” the image and find the initial and final points that are not background per slice. Note that due to the “convoluted” shapes of some shoes, techniques relying on contours might not work here.
I have coded a series of functions to make our lives easier. Here I show the results of using some of those functions.
The figure shows 60 points of interest found as we move along the image horizontally.
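Those helper functions aren’t included in this post; below is a rough sketch of the slicing idea, assuming a near-white background (the threshold and names are my own, and the slicing here goes row by row, whereas the original may slice column by column):

```python
import numpy as np

def silhouette_points(image, n_slices=60, background_threshold=250):
    """Scan a white-background image in slices and keep, per slice,
    the first and last pixels that are not background."""
    height = image.shape[0]
    # a pixel counts as foreground if any channel is clearly darker than white
    foreground = (image.min(axis=-1) if image.ndim == 3 else image) < background_threshold
    points = []
    for y in np.linspace(0, height - 1, n_slices).astype(int):
        xs = np.where(foreground[y])[0]
        if len(xs):
            points.append((xs[0], y))
            points.append((xs[-1], y))
    return np.array(points)
```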
Once we have the points of interest we can compute the radial and angular metrics that will eventually lead to the Shape Context Matrix. The idea is the following: for a given point, compute the number of points that fall within a radial bin and an angular bin relative to that point.
In a first instance, we computed 2 matrices, one containing radial information and one containing angular information, per point of interest. For example, if we select 120 points of interest around the silhouette of the shoe, these matrices will be of dim (120,120).
Once we have these matrices, the next step consists in building the shape context matrix per point of interest. Eventually, all shape context matrices are flattened and concatenated resulting in what is referred to as Bin Histogram.
Let’s have a look at one of these shape context matrices. For this particular example we used 6 radial bins and 12 angular bins. Code to generate this plot can be found here:
This figure has been generated for the first point within our points-of-interest-array and is interpreted as follows: if we concentrate on the upper-left “bucket” we find that, relative to the first point in our array, there are 34 other points that fall within the largest radial bin (labelled 0 in the Figure) and within the first angular bin (labelled 0 in the Figure). More details on the interpretation can be found here
Once we have a matrix like the one in Figure 2 for every point of interest, we flatten and concatenate them resulting in an array of 12 $\times$ 6 $\times$ number of points (120 in this case), i.e. 8640 values. Overall, after all this process we will end up with a numpy array of dimensions (number of images, 8640). Now we just need to cluster these arrays.
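For reference, here is the per-image descriptor computation sketched end to end - a rough, unoptimised version; the binning details of the original implementation may well differ:

```python
def shape_context_descriptor(points, n_radial_bins=6, n_angular_bins=12):
    """Per point: histogram all other points into radial x angular bins,
    then flatten and concatenate everything into one 'bin histogram'."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    diffs = points[None, :, :] - points[:, None, :]
    dists = np.linalg.norm(diffs, axis=-1)
    angles = np.arctan2(diffs[..., 1], diffs[..., 0]) % (2 * np.pi)

    # log-spaced radial bins relative to the mean pairwise distance
    mean_dist = dists[dists > 0].mean()
    radial_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_radial_bins) * mean_dist

    descriptor = []
    for i in range(n):
        others = np.arange(n) != i
        r_bin = np.searchsorted(radial_edges, dists[i, others])
        a_bin = (angles[i, others] / (2 * np.pi) * n_angular_bins).astype(int)
        hist = np.zeros((n_radial_bins + 1, n_angular_bins))   # last row = overflow
        np.add.at(hist, (r_bin, a_bin), 1)
        descriptor.append(hist[:n_radial_bins].ravel())
    return np.concatenate(descriptor)   # length = n_points * 6 * 12
```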
A detailed discussion on how to pick the number of clusters and the potential caveats can be found here. In this post I will simply show the results of using MiniBatchKMeans to cluster the arrays using 15 clusters. For example, clusters 2, 3 and 10 look like this.
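In scikit-learn terms (descriptors being the (number of images, 8640) array from the previous step):

```python
from sklearn.cluster import MiniBatchKMeans

kmeans = MiniBatchKMeans(n_clusters=15, random_state=0)
cluster_labels = kmeans.fit_predict(descriptors)
```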
Interestingly, cluster 1 is comprised of images with a non-white and/or structured background, images with a shape different from that of a shoe, and some misclassifications. Some advice on how to deal with the images in that cluster can be found here.
There are still a few aspects to cover to isolate the side views of the shoes with more certainty, but I will leave this for a future post (if I have the time!).
In addition, there are some other features and techniques one could try to improve the quality of the clusters, such as GIST descriptors or Haralick textural features.
Of course, if you have the budget, you can always pay for someone to label the entire dataset, turn this into a supervised problem and use Deep Learning. A series of convolutional layers should capture shapes, colours and patterns. Nonetheless, if you think for a second about the nature of this problem, you will see that even deciding the labelling is not a trivial task.
Anyway, for now, I will leave it here!
The code for the process described in this post can be found here
]]>I've been interviewing data engineering contractors recently. All of the candidates are very senior people with 10+ years of experience. My go-to question:
Me: What data structure would you use (in your favorite programming language) to store a large number (let’s say 100k) of strings - so they can be looked up efficiently? And by ‘looked up’ I mean - user will come up with a new string (‘banana’) and you have to quickly tell if this string is an element of your collection of 100k?
Candidate: I would load them in an RDD and then…
Me: No, no, I’m not asking about Spark. This is a regular single-threaded, in-memory, computer science 101 problem. What is the simplest thing that you could do?
Candidate: Grep. I would use grep to look for the string.
Me: Terrific. Sorry, maybe I wasn’t clear, I’m NOT talking about finding a substring in a larger text… You know what, forget about the strings. There are no strings. You have 100k integers. What data structure would you put them in so you can quickly look up if a new integer (1743) belongs to the collection?
Candidate: For integers I would use an array.
Me: And how do you find out if the new integer belongs to this array?
Candidate: There is a method ‘contains’.
Me: Ok. And for an array of n integers, what is the expected running time of this method in terms of n?
Candidate: …
Me: …
Candidate: I think it would be under one minute.
Me: Indeed.
This one was particularly funny, but otherwise unexceptional. This week I interviewed 4 people and not a single one of them mentioned hash tables. I would have also accepted ‘HashMap’, ‘Map’, ‘Set’, ‘dictionary’, ‘python curly braces’ - anything pointing in vaguely the right direction, even if they didn’t understand the implementation. Instead I only got ‘a vector, because they are thread safe’, ‘ArrayList because they are extensible’, ‘a List because lists in scala are something something’, ‘in my company we always use Sequences’. Again: these are very experienced people who are being paid a lot of money contracting for corporations in London and who can very convincingly bullshit about their Kafkas, Sparks, HBases and all the other Big Data nonsense.
Another bizarre conversation occurred when a candidate with 16 years of experience with Java (confirmed by the Sun certificate) immediately came up with the idea of putting the strings in buckets based on their hash and started explaining to me basically how to implement a hash table in Java, complete with the discussion of the merits of different string hashing functions. When I suggested that maybe Java already has a collection type that does all of this he reacted with indignation - he shouldn’t have to know this, you can find out on the internet. Fair enough, but one would think that after 16 years of programming in that language someone would have encountered HashMaps once or twice… This seemed odd enough that for my next question I went off script:
Me: Can you tell me what is the signature of the main method in Java?
Candidate: What?
Me: Signature of the main method. Like, if you’re writing the ‘hello world’ program in Java, what would you have to type?
Candidate: class HelloWorld
Me: Go on.
Candidate: int main() or void main() I think
Me: And the parameters?
Candidate: Yes, I remember, there are command line parameters.
Me: …
Candidate: Two parameters and the second is an integer.
Me: Thank you, I know all I wanted to know.
Moral of this story?
Come to London, be a data engineering contractor and make £500/day. You can read about Java on wikipedia, put 15 years of experience on your resume and no one will be the wiser.
]]>Which should you use and when? Which should you learn first? Is type safety more important than flexibility? Is Python fast enough for performance-heavy applications? Is Scala’s machine learning ecosystem mature enough for serious data science? Are indents better than braces?
This post won’t answer any of those questions.
I will show how to solve a related problem though. Given the following text, which was stitched together from bits of scikit-learn and scalaz code files, can you tell where Python ends and Scala begins?
(A long sample of interleaved Python and Scala code followed here; the same text reappears further down, overlaid with the network's predictions.)
I will show how Keras LSTMs and bidirectional LSTMs can be used to neatly solve this problem. The post will contain some snippets of code but the full thing is here.
I once interviewed with a cyber security company that was scraping the web looking for people’s phone numbers, emails, credit card numbers etc. They asked me how I would go about building a model that finds those things in text files and also categorizes the files into types like ‘email’, ‘server logs’, ‘code’, etc.
The boring answer is that with enough feature engineering you could classify files pretty well with any old ML algorithm. If all lines have a common prefix - then we're probably dealing with a log file. If there's a lot of camelCase() - that means we're seeing code. And so on.
Finding e.g. phone numbers in text is more involved but still doable this way. You would have to first generate potential matches using regular expressions and then classify each as true or spurious based on the context it appears in.
Inevitably, for every new file type and every type of entity to be found in the file, one would have to come up with new features and maybe train a separate classifier.
Super tedious.
The fun and potentially superior solution uses char-RNNs. Instead of all those handcrafted features and regular expressions and different models, we can train a single recurrent neural network to label each character in the text as either belonging to a phone number (credit card number, email …) or not. If we do it right and have enough training data, the network should be able to learn that phone numbers are more likely to occur in emails than in server logs and that Java code tends to use camel case while Python has indented blocks following a colon - and all kinds of other features that would otherwise have to be hardcoded.
Let’s do it!
As it turned out, the hardest part was getting and preparing the data. Since I don’t have access to a labeled dataset with phone numbers and emails, I decided to create an artificial one. I took all the Python files from scikit-learn repository and all the Scala files from scalaz and spliced them together into one giant sequence of characters. The sequence takes a few dozen consecutive characters from a Python file, then a few dozen from a Scala file, then Python again and so on. The result is the Frankenstein’s monster at the top of the post (except tens of megabytes more of it).
The sequence made up of all the Python and Scala files wouldn’t fit in my RAM (Big Data, as promised ;), so it is generated online during training, using a generator:
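The original snippet didn't survive in this export; below is a rough reconstruction of the idea. The file lists, chunk lengths and the 0/1 labels are my own choices:

```python
import random

def char_label_generator(python_files, scala_files, min_chunk=30, max_chunk=100):
    """Yield (character, label) pairs, alternating between randomly chosen
    Python and Scala files, one random-length chunk at a time."""
    sources = [(python_files, 0), (scala_files, 1)]   # 0 = Python, 1 = Scala
    while True:
        files, label = random.choice(sources)
        text = open(random.choice(files)).read()
        start = random.randint(0, max(0, len(text) - max_chunk))
        chunk = text[start:start + random.randint(min_chunk, max_chunk)]
        for char in chunk:
            yield char, label
```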
The other reason for using a generator is that the sequence can be randomized (both the order of files and the number of consecutive characters taken from one source). This way the network will never see the same sequence twice which will reduce overfitting.
Next step is encoding the characters as vectors (one-hot-encoding):
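Again only a sketch - here the alphabet is simply every character seen in the corpus, and all_text is a stand-in for the concatenated training files:

```python
import numpy as np

alphabet = sorted(set(all_text))                 # every character seen in the corpus
char2idx = {c: i for i, c in enumerate(alphabet)}

def one_hot(char):
    vec = np.zeros(len(alphabet), dtype=np.float32)
    vec[char2idx.get(char, 0)] = 1.0             # unknown characters fall back to index 0
    return vec
```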
To take advantage of the parallel processing powers of the GPU, the input vectors need to be shaped into batches. Keras requires that batches for LSTM be 3-dimensional arrays, where first dimension corresponds to the number of samples in a batch, second - number of characters in a sequence and third - dimensionality of the input vector. The latter is in our case equal to the number of characters in our alphabet.
For example, if there were only two sequences to encode, both of length 4, and only 3 letters in the alphabet, this is how we would construct a batch:
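Something along these lines, with made-up sequences and alphabet:

```python
import numpy as np

# two sequences of length 4 over the alphabet ('a', 'b', 'c')
# -> batch of shape (n_samples, sequence_length, alphabet_size) = (2, 4, 3)
sequences = ["abca", "cabb"]
char2idx = {'a': 0, 'b': 1, 'c': 2}

batch = np.zeros((2, 4, 3), dtype=np.float32)
for row, seq in enumerate(sequences):
    for col, char in enumerate(seq):
        batch[row, col, char2idx[char]] = 1.0
```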
If the sequences are too long to fit in one batch - as they are in our case - they need to be split into multiple batches. This would ordinarily mean losing some context information for characters that are near the boundary of a sequence chunk. Fortunately Keras LSTM has a setting stateful=True which tells the network that the sequences from one batch are continued in the next one. For this to work, the batches must be prepared in a specific way, with the n-th sequence in a batch being continued in the n-th sequence of the next batch.
1 2 3 4 5 6 7 8 9 10 11 12 13 |
|
In our case, each sequence is produced by a generator reading from files. We will have to start a number of generators equal to the desired batch size.
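A sketch of such a batch generator, reusing the hypothetical helpers from the snippets above (char_label_generator, one_hot, alphabet):

```python
import numpy as np

def batch_generator(python_files, scala_files, batch_size=32, sequence_len=128):
    """One character stream per row of the batch; the n-th row of every batch
    continues exactly where the n-th row of the previous batch left off."""
    streams = [char_label_generator(python_files, scala_files)
               for _ in range(batch_size)]
    while True:
        X = np.zeros((batch_size, sequence_len, len(alphabet)), dtype=np.float32)
        y = np.zeros((batch_size, sequence_len, 1), dtype=np.float32)
        for row, stream in enumerate(streams):
            for col in range(sequence_len):
                char, label = next(stream)
                X[row, col] = one_hot(char)
                y[row, col, 0] = label
        yield X, y
```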
Done. This generator produces batches accepted by Keras' LSTM. The batch_size and sequence_len settings influence GPU/CPU utilisation but otherwise shouldn't make any difference (as long as stateful=True!).
Now for the easy part. Construct the network:
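The exact architecture isn't preserved here, so treat the following as a plausible sketch - the layer sizes, number of layers and optimiser are guesses, argument names vary a little between Keras versions, and batch_size, sequence_len and alphabet are reused from the sketches above:

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

model = Sequential()
model.add(LSTM(512, return_sequences=True, stateful=True,
               batch_input_shape=(batch_size, sequence_len, len(alphabet))))
model.add(LSTM(512, return_sequences=True, stateful=True))
# one Python-vs-Scala probability per character
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```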
And train it:
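Roughly like this - steps_per_epoch and epochs are arbitrary here, since an "epoch" is just a checkpointing unit when the data comes from an endless generator:

```python
model.fit_generator(
    batch_generator(train_python_files, train_scala_files,
                    batch_size=batch_size, sequence_len=sequence_len),
    steps_per_epoch=1000,
    epochs=20)
```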
Making predictions is just as easy:
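Assuming X_batch was prepared in the same shape as the training batches:

```python
predictions = model.predict(X_batch, batch_size=batch_size)   # (batch_size, sequence_len, 1)
```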
That’s it! The full code I used has a few more bells and whistles, but this is the core of it.
I have split the Python and Scala files into train and test sets (80:20) and trained the network on the training set for a few hours. This is what the network's prediction on the test set (same text as at the top of this post) looks like:
package scalaz
package syntax
"""
Extended math utilities.
"""
# Authors: Gael Varoquaux
# Alex/** Wraps a value `selfandre Gramfort
# Alexandre T. Passos
# Olivier Grisel
# Lars Buitinck
# Stefan van der Walt
# Kyle Kastner
# Giorgio Patrini
# License:` and provides methods related to `MonadPlus` */
final class MonadPlusOps[F[_],A] private[syntax](val self: BSD 3 clause
from __future__ import division
from functools import partial
import warnings
import numpy as np
from scipy import linalg
from scipy.sparse import issparse, csr_matr F[A])(implicit val F: MonadPlus[F]) extends Ops[F[A]] {
////
impoix
from . import check_random_state
from .fixrt Leibniz.===
def filter(f: A => Boolean): F[A] =
F.filter(self)(f)
def withFilter(f: A => Boolean): F[A] =
filter(f)
final def uniteU[T](implicit T: Unapply[Foldable, Aes import np_version
from ._logistic_sigmoid import _log_logistic_sigmoid
from ..extern]): F[T.A] =
F.uniteU(self)(T)
def unite[T[_], B](implicit ev: A === T[B], T: Foldable[T]): F[B] = {
val ftb: F[T[B]] = ev.subst(seals.six.moves import xrange
from .sparsefuncs_fast import csr_row_norms
from .validation import check_array
from ..exceptions import NonBLASDotWarning
lf)
F.unite[T, B](ftb)
}
final def lefts[G[_, _], B, C](implicit ev: A === G[B, C], G: Bifoldable[G]): F[B] =
F.lefts(ev.subst(self))
final def rigdef norm(x):
"""Compute the Euclidean or Frobenius norm of x.
hts[G[_, _], B, C](implicit ev: A === G[B, C], G: Bifoldable[G]): F[C] =
F.rights(ev.subst(self))
final def separate[G[_, _], Returns the Euclidean norm when x is a vector, the Frobenius norm when x
is a matrix (2-d array). More precise than sqrt(squared_norm(x)).
"""
x = np.asarray(x)
nrm2, = lin B, C](implicit ev: A === G[B, C], G: Bifoldable[G]): (F[B], F[C]) =
F.separate(ev.subst(self))
////
}
sealed trait ToMonadPlusOps0 {
implicit def Talg.get_blas_funcs(['nrm2'], [x])
return nrm2(x)
# Newer NumPy has a ravel that needs leoMonadPlusOpsUnapply[FA](v: FA)(implicit F0: Unapply[MonadPlus, FA]) =
new MonadPlusOps[F0.M,F0.A](F0(v))ss copying.
if np_version < (1, 7, 1):
_ravel = np.ravel
else:
_ravel = partial(np.ravel, order='K')
def squared_no(F0.TC)
}
trait ToMonadPlusOps extends ToMonadPlusOps0 with ToMonadOps with ToApplicatrm(x):
"""Squared Euclidean or Frobenius norm of x.
Returns the Euclidean norm when x is a vector, the Frobenius norm when x
is a matrix (2-d array). Faster than norm(ivePlusOps {
implicit def ToMonadPlusOps[F[_],A](v: F[A])(implicit F0: MonadPlus[F]) =
new MonadPlusOps[F,A](v)
////
////
}
trait MonadPlusSyntax[F[_]] extends MonadSyntax[F] withx) ** 2.
"""
x = _ravel(x)
if np.issubdtype(x.dtype, np.integer):
ApplicativePlusSyntax[F] {
implicit def ToMonadPlusOps[A](v: F[A]): MonadPlusOps[F, A] = ne warnings.warn('Array type is integer, np.dot may overflow. '
'Data should be float type to avoid this issue',
UserWarning)
return np.dot(xw MonadPlusOps[F,A](v)(MonadPlusSyntax.this.F)
def F: MonadPlus[F]
////
////
}
package scalaz
package syntax
/** Wraps a value `self` and provides methods, x)
def row_norms(X, squared=False):
"""Row-wise (squared) Euclidean norm of X.
E related to `Traverse` */
final class Tquivalent to np.sqrt((X * X).sum(axis=1)), but also supporaverseOps[F[_],A] private[syntax](val self: F[A])(implicit val F: Traverse[F]) exterts sparse
matrices and does not create an X.shape-sized temporary.
Performs no input valnds Ops[F[A]] {
////
import Leibniz.===
final def tmap[B](f: A => B): F[B] =
F.map(seidation.
"""
if issparse(X):
if not isinstance(X, csr_matrix):
Font size shows the true label (small - Python, big - Scala) and background color represents the network’s prediction (white - Python, dark red - Scala).
It's pretty good overall, but the network keeps making a few unforced errors. Consider this bit:
package scalaz
package syntax
"""
package scalaz should be a dead giveaway, yet the prediction only becomes confident at about the character 'g'. The triple quotes following a stretch of Scala code should immediately be labeled as Python, but only the third quote is. These mistakes stem from the fact that the RNN doesn't look ahead and can only interpret a character in the context of characters that came before. Triple quotes almost certainly come from a stretch of Python code, but you don't know that you're seeing triple quotes until you've seen all three. That's why the prediction gradually changes from Scala to Python (red to white) as the RNN encounters the second and third consecutive quote.
This problem actually has a straightforward solution - bidirectional RNN. It’s a type of RNN where the sequence is fed to it from both ends at the same time. This way, the network will be aware of the second and third quotation marks already when it’s producing the label for the first one.
To make the LSTM bidirectional in Keras one simply needs to wrap it with the Bidirectional wrapper:
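Roughly like this - only the relevant line is shown, and the input shape arguments stay as before:

```python
from keras.layers import Bidirectional, LSTM

model.add(Bidirectional(LSTM(512, return_sequences=True, stateful=True)))
```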
Everything else stays the same.
Here’s a sample of results from a bidirectional LSTM:
package scalaz
package std
import std.AllInstances._
import scalaz.scalacheck.ScalazProperties._
import scalaz.scalac"""
===============================heck.ScalazArbitrary._
import org.scalacheck.{Gen, Arbitrary}
import Lens.{lens => _, _}
import org.scalacheck.Prop.fo=========
Comparison of Calibration of ClassifrAll
object LensTest extends SpecLite {
{
implicit def lensArb = Arbitrary(Gen.const(Lens.lensId[Int]))
implicit def lensEqual = new Equal[Lens[Int, Iiers
========================================
Well calibrated classifiers are probabint]] {
def equal(a1: Lens[Int, Int], a2: Lens[Int, Int]): Boolean = a1.get(0) == a2.get(0)
}
checkAll("Lens", category.laws[Lens]) // not really testing much!
}
checkAll("id",listic classifiers for which the output
of the predict_proba method can be directly interpreted as a confidence level.
For instance a well calibrated (binary) classifier should classify the samp lens.laws(Lens.lensId[Int]))
checkAll("trivialles
such that among the samples to which it gave a predict_proba", lens.laws(Lens.trivialLens[Int]))
checkAll("codiagLens", lens.laws(Lens.codiagLens[Int]))
checkAll("Tuple2.first", lens.laws(Lens.firstLens[Int, Int]))
checkAll("Tuple2.second", le value close to
0.8, approx. 80% actually belong to the positive class.
Logisticns.laws(Lens.secondLens[Int, Int]))
checkAll("Set.containRegression returns well calibrated predictions as it directly
os", lens.laws(Lens.lensId[Set[Int]].contains(0)))
checkAll("Map.member", lens.laws(Lens.lensId[Map[Boolean, Int]].ptimizes log-loss. In contrast, the othemember(true)))
checkAll("sum", lens.laws(Lens.firsr methods return biased probabilities,
with different biases per method:
* GaussianNaiveBayes tends to push probabilities to 0 otLens[Int, String].sum(Lens.firstLens[Int, String])))
"NumericLens" should {
"+=" ! forAll((i: Int) => (Lens.lensId[Int] += i).run(1) must_=== ((i + 1) -> (i +
I think this looks better overall. The problem of updating the prediction too slowly is mostly gone - package scalaz is marked as Scala immediately, starting with the letter 'p'. However, now the network started making weird mistakes in the middle of a word for no reason. Like this one:
Comparison of Calibration
Why is the middle of the ‘Calibration’ all of a sudden marked as Scala?
The culprit is statefulness. Remember that stateful=True means that for each sequence in a batch, the state of the network at the beginning of a sequence is reused from the state at the end of the previous sequence*. This acts as if there were no batches, just one unending sequence. But in a bidirectional layer the sequence is fed to the network twice, from both directions. So half of the state should be borrowed from the previous sequence, and half from the next sequence that has not been seen yet! In reality all of the state is reused from the previous sequence, so half of the network ends up in the wrong state. This is why those weird mispredictions appear, and appear at regular intervals: at the beginning of a new batch, half of the network is in the wrong state and starts predicting the wrong label.
* or more precisely, the state at the end of the corresponding sequence in the previous batch
Let’s get rid of statefulness in the bidirectional version of the network:
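The change amounts to flipping a single flag (sketch):

```python
model.add(Bidirectional(LSTM(512, return_sequences=True, stateful=False)))
```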
Unfortunately this means that we will have to use longer sequences (in the previous experiments I used 128 characters, now 200) to give the network more context for labeling a character. And even with that, prediction for characters near the boundary between consecutive sequences is bound to be poorer - like in regular unidirectional LSTM. To make up for it I decided to give the network more layers (4) and more time to train (a day). Let’s see how it worked out:
package scalaz
import scalaz.syntax.equal._
import scalaz.syntax.show._
sealed abstract class Either3[+A, +B, +C] extends Pro"""Bayesian Gaussian Mixture Modduct with Serializable {
def fold[Z](left: A => Z, middle: B => Z, right: C => Z): Z = this match {
case Left3(a) => left(a)
caseel."""
# Author: Wei Xue <xuewei4d Middle3(b) => middle(b)
case Right3(c) => right(c)
}
def eitherLeft: (A \/ B) \/ C = this match {
case Left3(a) => -\@gmail.com>
# Thierry Guillemot <thierry.guillemot.work@gmail.com>
# License: BSD 3 clause
import math
import numpy as np
from scipy.special import betaln, digamma, /(-\/(a))
case Middle3(b) => -\/(\/-(b))
case Right3(c) => \/-(c)
}
gammaln
from .base import BaseMixture, _check_shape
from .gaussian_mixture import _check_precision_matrix
from .gaussian_mixture import _check_precision_positivity
from .gaus def eitherRight: A \/ (B \/ C) = this match {
case Left3(a) => -\/(a)
case Middle3(b) => \/-(-\/(b))
case Right3(c)sian_mixture import _compute_log_det_cholesky
from .gaussian_mixture import _compute_precision_cholesky
from .gaussian_mixture import _estimate_gaussian_p => \/-(\/-(c))
}
def leftOr[Z](z: => Z)(f: A => Z): Z = fold(f, _ => z, _ => z)
def middleOr[Z](zarameters
from .gaussian_mixture import _estimate_log_gaussian_prob
from ..utils import check_array
from ..utils.validation import check_is_fitted
def _log_dirichlet_norm(dirichlet_concentration: => Z)(f: B => Z): Z = fold(_ => z, f, _ => z)
def rightOr[Z](z: => Z)(f: C => Z): Z = fold(_ => z, _ => z, f)
}
final case class Left3[+A, +B, +C](a: A) extends Either3[A, B, C]
final case cla):
"""Compute the log of the Dirichlet distribution normalization term.
Parameters
----------
dirichletss Middle3[+A, +B, +C](b: B) extend_concentration : array-like, shape (n_samples,)
The s Either3[A, B, C]
final case class Right3[+A, +B, +C](c: parameters values of the Dirichlet distribution.
Returns
-------
log_dirichlet_norm : float
The log normalization of the DirichleC) extends Either3[A, B, C]
object Either3 {
def left3[A, B, C](a: A): Either3[A, B, C] = Left3(a)
def middle3[A, B, C](b: B)t distribution.
"""
return (gammaln(np.sum(dirichlet_concentration)) -
np.sum(gammaln(dirichlet_concentration)))
def _log_wishart_norm(degrees_o: Either3[A, B, C] = Middle3(b)
def right3[A, B, C](c: C): Either3[A, B, C] = Right3(c)
implicit def equal[A: Equal, B: Equal, C: Equalf_freedom, log_det_precisions_chol, n_features):
"""Compute the log of the Wishart distribution normalization term.
Parameters
----------
degrees_of_freedom : array-like, shape ]: Equal[Either3[A, B, C]] = new Equal[Either3[A, B, C]] {
def equal(e1: Either3[A, B, C], e2: Either3[A, B, C]) = (e1, e2) match {
case (Left3(a1)(n_components,)
The number of degrees of freedom on t, Left3(a2)) => a1 === a2
case (Middle3(b1), Middle3(b2)) => b1 === b2
case (Right3(c1), Right3(c2)) => c1 === c2
case _ => false
}
}
implicihe covariance Wishart
t def show[A: Show, B: Show, C: Show]: Show[Either3[A, B, C]] = ne distributions.
log_det_precision_chol : array-like, shapw Show[Either3[A, B, C]] {
override def show(v: Either3[A, B, C]) = v match {
case Left3(a) => Cord("Left3(", a.shows, e (n_components,)
The determinant of the precision matrix for each component.
n_feat")")
case Middle3(b) => Cord("Middle3(", b.shows, ")")
case Right3(c) => Cord("Right3(", c.shows, ")")
}
}
}
// vim: set ts=4 sw=4 et:
package scalaz
package syntures : int
The number of features.
Return
------
log_wishart_norm : array-like, shape (n_components,)
The log noax
/** Wraps a value `self` and provides methods related to `Unzip` */
final class UnzipOps[F[_],A] private[syntax](val self: F[A])(implicit val F: Unzip[F]) extends Ops[F[Armalization of the Wishart distribution.
"""
# To simplify the comp]] {
////
////
}
sealed trait ToUnzipOps0 {
implicit def ToUnzipOpsUnapply[FA](v: FA)(implicit F0: Unapply[Unzip, FA])utation we have removed the np.log(np.pi) term
return -(degrees_of_freedom * log_det_precisi =
new UnzipOps[F0.M,F0.A](F0(v))(F0.TC)
}
trait ToUnzipOps extends ToUnzipOps0 {
implicit def ToUnzipOps[F[_],A](v: Fons_chol +
degrees_of_freedom * n_features * .5 * math.log(2.) +
np.sum(gammaln(.5 * (degrees_of_freedom -
[A])(implicit F0: Unzip[F]) =
new UnzipOps[F,A](v)
////
implicit def ToUnzipPairOps[F[_],A,B](v: F[(A, B)])(implicit F0: Unzip[F]) =
new UnzipPairOps[F,A,B](v)(F0)
final c np.arange(n_features)[:, np.newaxis])), 0))
class BayesianGaussianMixture(BaseMixlass UnzipPairOps[F[_],A, B] private[syntax](self: F[(A, B)])(imture):
"""Variational Bayesian estimation of a Gaussian mixt
Weird mislabelings are gone, boundaries between labels are crisp, overall accuracy improved. It’s practically perfect. Thank you François Chollet!
This is it for now. More experiments in the next post.
As a bonus, this is a prediction from a network trained on the collected works of Shakespeare mixed with .R files from the caret repository:
SCENE III.
CYMBELINE'S palace. An ante-chamber adjoining IMOGEN'S apartments
Enter CLtimestamp <- Sys.time()
library(caret)
model <- "nbSearch"
######################################OTEN and LORDS
FIRST LORD. Your lordship is the most patient man in loss, the most
coldest that ever turn'd up ac###################################
set.seed(2)
training <- LPH07_1(100, factors = TRUE, class = TRUE)
testing <- LPH07_1(100, factors = TRUE, class = TRUE)
trainX <- training[, -ncol(te.
CLOTEN. It would make any man cold to lose.
FIRST LORD. But not every man paraining)]
trainY <- training$Class
cctrl1 <- trainControl(method = "cv",tient after the noble temper of
your lordship. You are most hot and furious when you win.
CLOTEN. Winning will put any man into courage. If I cou number = 3, returnResld get this
foolish Imogen, I should have gold enough. It's almost morning,
is't not?
FIRST LORD. Day, my lord.
CLamp = "all",
classProbs = TRUE,
summaryFunction = twoClassSummary)
cctrl2 <OTEN. I would this music would come- trainControl(method = "LOOCV",
classProbs = TRUE, summaryFunction = twoClassSummary)
cctrl3 <- trainControl(method = ". I am advised to give her
music a mornings; they say it will penetrate.
Enter musicians
Come on, tune. If you none",
classcan penetrate her with your fingering, so.
We'll try with tongue too. If none wilProbs = TRUE, summaryFunction = twoClassSummary)
cctrlR <l do, let her remain; but
I'll nev- trainControl(method = "cv", number = 3, returnResamp = "all", search = "random")
set.seed(849)
test_class_cv_model <- train(trainX, trainY,
er give o'er. First, a very excellent good-conceited
thing; after, a wonderful sweet air, with admirable rich words to
it- and then let her consider.
SONG
Hark, har method = "nbSearch",
k! the lark at heaven's gate sings,
And Phoebus 'gins arise,
His steeds to water at those springs
On chalic'd flow'rs that lies;
And winking Ma trControl = ccry-buds begin
To ope their golden eyes.
With everything that pretty bin,
My lady sweet, arise;
Arise, arise!
So, get you gone. If this penetrate, I trl1,
metric = "ROC")
test_class_pred <- predict(test_class_cv_model, testing[, -ncol(testing)])
test_class_prob <- predict(test_classwill consider your music
the better; if it do not, it is a vice in her ears which
horsehairs and calves' guts, nor the voice of unpaved eunuch to
boot, can_cv_model, testing[, -ncol(testing)], type = "prob")
never amend. Exeunt musicians
Enter CYMBELINE and QUEEN
SECOND LORD. Here comes the King.
CLOTEN. I am glad I was up so late, for that's the re
set.seed(849)
test_class_rand <- trainason I was up
so early. He cannot choose but take this service I hav(trainX, trainY,
method = "nbSearch",
trControl = cctrlR,
e done
fatherly.- Good morrow to your Majesty and to my gracious mother.
CYMBELINE. Attend you here the door of our stern daughter?
Will she no tuneLength = 4)
set.seed(849)
test_class_loo_model <- train(trainX, trainY,
method = "nbt forth?
CLOTEN. I have assail'd her with musics, but she vouchsafes no
notice.
CYMBELINE.Search",
What have we learned?
Neural networks are in many ways overhyped. On most supervised machine learning problems you would be better off with a good old random forest. But tagging sequences is one of those applications that are difficult and tedious to even translate into a regular supervised learning task. How do you feed a stream of characters into a decision tree? And an RNN solves it straight out of the box, no questions asked.
]]>In the previous posts I showed the imputation of boolean missing data, but the same method works for categorical features of any cardinality as well as continuous ones (except in the continuous case additional prior knowledge is required to specify the likelihood). Nevertheless, I decided to test the imputers on purely boolean datasets because it makes the scores easy to interpret and the models quick to train.
To make it really easy on the Bayesian imputer, I created a few artificial datasets by the following process: take a known Bayesian network and sample every record independently from the joint distribution it defines.
With data generated by the same Bayesian network that we will fit to it, we’re making it as easy on pymc as possible to get a good score. Mathematically speaking, the bayesian model is the way to do it. Anything less than optimal performance can only be due to a bug or pymc underfitting (perhaps from too few iterations).
The first dataset used is based on the famous wet sidewalk - rain - sprinkler network as seen in the wikipedia article on Bayesian networks.
The second, bigger, is based on the LUCAS network
And the biggest one is based on an example from some ML lecture notes
For each of these networks I would generate a dataframe with 10, 50, 250, 1250 or 6250 records and drop (replace with -1) a random subset of 20% or 50% of values in each column. Then I would try to fill them in with each model and score the model on accuracy. This was repeated 5 times for each network and data size and the accuracy reported is the mean of the 5 tries.
The following models were used to impute the missing records:
most frequent - a dummy model that predicts the most frequent value per dataframe column. This is the absolute baseline of imputer performance; every model should be at least as good as this.

xgboost - a more ambitious machine learning-based baseline. This imputer simply trains an XGBoost classifier for every column of the dataframe. The classifier is trained only on the records where the value of that column is not missing and it uses all the remaining columns to predict that one. So, if there are n columns - n classifiers are trained, each using the n - 1 remaining columns as features.

MAP fmin_powell - a model constructed the same way as the DuckImputer model from the previous post. Actually, it's a different model for each dataset, but the principle is the same. You take the very same Bayesian network that was used to create the dataset and fit it to the dataset. Then you predict the missing values using MAP with the 'method' parameter set to 'fmin_powell'.

MAP fmin - same as above, only with 'method' set to 'fmin'. This one actually performed so poorly (no better than random and worse than most frequent) and was so slow that I quickly dropped it from the benchmark.

MCMC 500, MCMC 2000, MCMC 10000 - same as the MAP models, except for the last step. Instead of finding the maximum a posteriori for each variable directly using the MAP function, the variable is sampled n times from the posterior using MCMC, and the empirically most common value is used as the prediction. Three versions of this model were used - with 500, 2000 and 10000 iterations for burn-in respectively. After burn-in, 200 samples were used each time.

Let's start with the simplest network:
Rain-Sprinkler-Wet Sidewalk benchmark (20% missing values). Mean imputation accuracy from 5 runs vs data size.
Average fitting time in seconds. Beware log scale!
XGBoost comes out on top; bayesian models do poorly, apparently worse than even the most frequent imputer. But variance in scores is quite big and there is not much structure in this dataset anyway, so let's not lose hope. MAP fmin_powell is particularly bad and terribly slow on top of that, so I dropped it from further benchmarks.
Let’s try a wider dataset - the cancer network. This one has more structure - that the bayesian network knows up front and xgboost doesn’t - which should give bayesian models an edge.
Cancer network imputation accuracy. 20% missing values
Cancer network imputation time.
That’s more like it! MCMC wins when records are few, but deteriorates when data gets bigger. MCMC models continue to be horribly slow.
And finally, the biggest (27 features!), car insurance network.
Car insurance network imputation accuracy. 20% missing values
Car insurance network imputation time.
Qualitatively same as the cancer network case. It’s worth pointing out that in this case, the Bayesian models achieve at 50 records a level of accuracy that XGBoost doesn’t match until shown more than a thousand records! Still super slow though.
What have we learned?
Overall, I count this experiment as a successful proof of concept, but of very limited usefulness in its current form. For any real world application one would have to redo it using some other technology. pymc is just not up to the task.
]]>This post assumes that the reader is already familiar with both bayesianism and pymc. If you aren’t, I recommend that you check out the fantastic Bayesian Methods For Hackers.
* technically, everything in pymc is a Bayesian network, I know
We have observed 10 animals and noted 3 things about each of them: - does it swim like a duck? - does it quack like a duck? - is it, in fact, a duck?
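The original table isn't reproduced in this export; a hypothetical version of such a dataset (values are illustrative, chosen so that an animal is a duck exactly when it both swims and quacks like one) might look like this:

```python
import pandas as pd

animals = pd.DataFrame({
    'swims_like_a_duck':  [1, 1, 0, 0, 1, 0, 1, 0, 1, 1],
    'quacks_like_a_duck': [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
    'duck':               [1, 0, 0, 0, 1, 0, 1, 0, 0, 1],
})
```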
It is easy to notice that in this dataset an animal is a duck if and only if it both swims like a duck and quacks like a duck. So far so good.
But what if someone forgets to write down whether the duck number 10 did any quacking or whether the animal number 9 was a duck at all? Now we have missing data. Here denoted by -1
1 2 3 4 5 |
|
This tells us about the last animal that it is a duck, but the information about swimming and quacking is missing. Nevertheless, having established the rule, we can infer that the values of swims_like_a_duck and quacks_like_a_duck must both be 1 for this animal.
This is what we will try to do here - learn the relationship between the variables and use it to fill in the missing ones.
To be able to attack this problem, let’s make one simplifying assumption. Let’s assume that we know the causal structure of the problem upfront. That is - we know that swimming and quacking are independent random variables, while being a duck is a random variable that potentially depends on the other two.
This is the situation described by this Bayesian network:
This network is fully characterised by 6 parameters - the prior probabilities of swimming and quacking - $P(swims)$, $P(quacks)$ - and the conditional probability of being a duck given values of the other 2 variables - $P(duck \mid swims \land quacks)$, $P(duck \mid \neg swims \land quacks)$ - and so on. We don't know anything about the values of these parameters, other than they must be between $0$ and $1$. The bayesian thing to do in such situations is to model the unknown parameters as random variables of their own and give them uniform priors.
Thus, the network expands:
This is the network describing a single animal, but actually we have observations of many animals, so the full network would look more like this:
There is only one node corresponding to each of the 6 parameters, but there are as many ‘swims’ and ‘quacks’ and ‘duck’ nodes as there are records in the dataset.
Some of the variables are observed (orange), others aren’t (white), but we have specified priors for all the parent variables and the model is fully defined. This is enough to (via Bayes theorem) derive the formula for the posterior probability of every unobserved variable and the posterior distribution of every model parameter.
But instead of doing math, we will find a way to programmatically estimate all those probabilities with pymc. This way, we will have a solution that can be easily extended to arbitrarily complicated networks.
What could go wrong?
Disclaimer: this is all hacky and inefficient in ways I didn’t realise it would be when I started working on it. pymc is not the right tool for the job, if you want to do this seriously, in a production environment you should look for something else. pymc3 maybe?
I will now demonstrate how to represent our quack-swim-duck Bayesian network in pymc and how to make predictions with it. pymc was confusing the hell out of me when I first started this project. I will be painstakingly explicit at every step of this tutorial to save the reader some of this confusion. Then at the end I will show how to achieve the same result with 1/10th as many lines of code using some utilities of my invention.
Let’s start with the unobserved variables:
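A sketch of what that could look like in pymc - the variable names are mine, and the conditional-probability parameters are spelled out one by one for clarity:

```python
import pymc

# uniform priors on the six unknown parameters of the network
p_swims  = pymc.Uniform('P(swims)', lower=0, upper=1)
p_quacks = pymc.Uniform('P(quacks)', lower=0, upper=1)

# one conditional probability of being a duck per combination of parent values
p_duck_00 = pymc.Uniform('p_duck_given_no_swim_no_quack', lower=0, upper=1)
p_duck_01 = pymc.Uniform('p_duck_given_no_swim_quack', lower=0, upper=1)
p_duck_10 = pymc.Uniform('p_duck_given_swim_no_quack', lower=0, upper=1)
p_duck_11 = pymc.Uniform('p_duck_given_swim_quack', lower=0, upper=1)
```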
Now the observed variables. pymc requires that we use masked arrays to represent missing values:
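For example, using the illustrative values from the table above, with -1 marking missing entries:

```python
import numpy as np
import numpy.ma as ma

swims_data  = ma.masked_values([1, 1, 0, 0, 1, 0, 1, 0, 1, -1], -1)
quacks_data = ma.masked_values([1, 0, 1, 0, 1, 0, 1, 0, -1, -1], -1)
duck_data   = np.array([1, 0, 0, 0, 1, 0, 1, 0, 0, 1])   # fully observed
```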
This is what a masked array with two missing values looks like:
1 2 3 4 |
|
Quacking and swimming nodes:
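A sketch - the masked entries automatically become unobserved variables to be inferred:

```python
swims  = pymc.Bernoulli('swims',  p=p_swims,  value=swims_data,  observed=True)
quacks = pymc.Bernoulli('quacks', p=p_quacks, value=quacks_data, observed=True)
```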
And now the hard part. We have to construct a Bernoulli random variable 'duck', whose conditional probability given its parents is equal to a different random variable for every combination of values of the parents. That was a mouthful, but all it means is that there is a conditional probability table of 'duck' conditioned on 'swims' and 'quacks'. This is literally the first example in every textbook on probabilistic models. And yet, there is no easy way to express this relationship with pymc. We are forced to roll our own custom function.
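The original function isn't preserved here; the following sketch expresses the same conditional probability table, with np.where picking the right parameter for every animal:

```python
import numpy as np

@pymc.deterministic
def duck_probability(swims=swims, quacks=quacks,
                     p_duck_00=p_duck_00, p_duck_01=p_duck_01,
                     p_duck_10=p_duck_10, p_duck_11=p_duck_11):
    # inside this function swims and quacks arrive as plain numpy arrays
    return np.where(swims,
                    np.where(quacks, p_duck_11, p_duck_10),
                    np.where(quacks, p_duck_01, p_duck_00))

duck = pymc.Bernoulli('duck', p=duck_probability, value=duck_data, observed=True)
```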
If you’re half as confused reading this code as I was when I was first writing it, you deserve some explanations.
swims and quacks are of type pymc.distributions.Bernoulli, but here we treat them like numpy arrays. This is @pymc.deterministic's doing. This decorator ensures that when this function is actually called it will be given swims.value and quacks.value as parameters - and these are indeed numpy arrays. Same goes for all the other parameters.

Normally one would pass a number or another random variable as the p parameter of a pymc.Bernoulli, but now we're using a function - duck_probability. Again, this is @pymc.deterministic at work. When applied to a function it returns an object of type pymc.PyMCObjects.Deterministic. At this point the thing bound to the name 'duck_probability' is no longer a function. It's a pymc random variable. It has a value parameter and everything.
Ok, let’s put it all together in a pymc model:
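Something like:

```python
model = pymc.Model([swims, quacks, duck,
                    p_swims, p_quacks,
                    p_duck_00, p_duck_01, p_duck_10, p_duck_11])
```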
aaaand we’re done.
Not really. The network is ready, but there is still the small matter of extracting predictions out of it.
The obvious way to estimate the missing values is with a maximum a posteriori estimator. Thankfully, pymc has just the thing - pymc.MAP. Calling .fit on a pymc.MAP object changes the values of variables in place, so let's print the values of some of our variables before and after fitting.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 |
|
optimise the values:
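Which is a one-liner:

```python
pymc.MAP(model).fit(method='fmin_powell')
```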
and inspect the results:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 |
|
The two False bits - in 'swims' and 'quacks' - have flipped to True and the values of the conditional probabilities have moved in the right direction! This is good, but unfortunately it's not reliable. Even in this simple example pymc's MAP rarely gets everything right like it did this time. To some extent it depends on the optimisation method used - e.g. pymc.MAP(model).fit(method='fmin') vs pymc.MAP(model).fit(method='fmin_powell'). Despite the warning message recommending 'fmin', 'fmin_powell' gives better results. 'fmin' gets the (more or less) right values for continuous parameters but it never seems to flip the booleans, even when it would clearly result in higher likelihood.
The other way of getting predictions out of pymc is to use its main workhorse - the MCMC sampler. We will generate 200 samples from the posterior using MCMC and for each missing value we will pick the value that is most frequent among the samples. Mathematically this is still just maximum a posteriori estimation but the implementation is very different and so are the results.
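A sketch - the burn-in length is one of the knobs benchmarked in the next post:

```python
sampler = pymc.MCMC(model)
sampler.sample(iter=10000 + 200, burn=10000)   # burn in, then keep 200 samples
```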
This should have produced 200 samples from the posterior for each unobserved variable. To see them, we use sampler.trace.
1 2 |
|
200 samples of the 'P(swims)' parameter - as promised.
1 2 |
|
200 samples of a conditional probability parameter.
1 2 |
|
The swims boolean variable also has 200 samples. But:
1 2 |
|
quacks has two times 200 - because there were two missing values among the quacks observations - and each is modeled as an unobserved variable.
sampler.trace('duck') produces only a KeyError - there are no missing values in duck, hence no samples.
Finally, the posterior probability for the missing swims observation:
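Which boils down to:

```python
# fraction of posterior samples in which the missing swims value is True
sampler.trace('swims')[:].mean()
```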
Great! According to MCMC the missing value in swims is more likely than not to be True!
(sampler.trace('swims')[:] is an array of 200 booleans; counting the number of True and False is equivalent to simply taking the mean).
1 2 |
|
And the two missing values in quacks are predicted to be False and True - respectively. As they should be.
Unlike the MAP approach, this result is reliable. As long as you give MCMC enough iterations to burn in, you will get very similar numbers every time.
This was soul-crushingly tedious, I know. But it doesn’t have to be this way. I have created a few utility functions to get rid of the boilerplate - the creation of uniform priors for variables, the conditional probabilities, the trace, and so on. The utils can all be found here (along with some other stuff).
This is how to define the network using these utils:
1 2 3 4 5 6 7 8 9 |
|
(there are also versions of make_bernoulli and cartesian_bernoulli_child for categorical variables). And this is how to use it:
1 2 3 4 5 6 7 8 9 10 11 12 |
|
Next post: how all this compares to good old xgboost.
]]>watercooling, pretty lights and 2 x GTX 1080 (on the right)
This topic has been widely written about by better people so if you don’t already know about char-RNNs go read them instead. Here is Andrej Karpathy’s blog post that started it all. It has an introduction to RNNs plus some extremely fun examples of texts generated with them. For an in depth explanation of LSTM (the specific type of RNN that everyone uses) I highly recommend this.
I started playing with LSTMs by copying the example from Keras, and then I kept adding to it. First - more layers, then - training with generators instead of batch - to handle datasets that don’t fit in memory. Then a bunch of scripts for getting interesting datasets, then utilities for persisting the models and so on. I ended up with a small set of command line tools for getting the data and running the experiments that I thought may be worth sharing. Here it is.
A network with 3 LSTM layers of 512 units each + a dense layer, trained for a week on the concatenation of all java files from the hadoop repository, produces stuff like this:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 |
|
That's pretty believable java if you don't look too closely! It's important to remember that this is a character-level model. It doesn't just assemble previously encountered tokens together in some new order. It hallucinates everything from the ground up. For example setSchedulerAppTestsBufferWithClusterMasterReconfiguration() is sadly not a real function in the hadoop codebase. Although it very well could be, and it wouldn't stand out among all the other monstrous names like RefreshAuthorazationPolicyProtocolServerSideTranslatorPB. Which was exactly the point of this exercise.
Sometimes the network decides that it's time for a new file and then it produces the Apache software licence verbatim, followed by a million lines of imports:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 |
|
At first I thought this was a glitch, a result of the network getting stuck in one state. But then I checked and - no, actually this is exactly what those hadoop .java files look like, 50 lines is a completely unexceptional amount of imports. And again, most of those imports are made up.
And here’s a bunch of python code dreamt up by a network trained on scikit-learn repository.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 |
|
This is much lower quality because the network was smaller and sklearn’s codebase is much smaller than that of hadoop. I’m sure there is a witty comment about the quality of code in those two repositories somewhere in there.
And here’s the result of training on the scalaz repository:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 |
|
In equal measure elegant and incomprehensible. Just like real scalaz.
Enough with the github. How about we try some literature? Here’s LSTM-generated Jane Austen:
“I wish I had not been satisfied with the other thing.”
“Do you think you have not the party in the world who has been a great deal of more agreeable young ladies to be got on to Miss Tilney’s happiness to Miss Tilney. They were only to all all the rest of the same day. She was gone away in her mother’s drive, but she had not passed the rest. They were to be already ready to be a good deal of service the appearance of the house, was discouraged to be a great deal to say, “I do not think I have been able to do them in Bath?”
“Yes, very often to have a most complaint, and what shall I be to her a great deal of more advantage in the garden, and I am sure you have come at all the proper and the entire of his side for the conversation of Mr. Tilney, and he was satisfied with the door, was sure her situation was soon getting to be a decided partner to her and her father’s circumstances. They were offended to her, and they were all the expenses of the books, and was now perfectly much at Louisa the sense of the family of the compliments in the room. The arrival was to be more good.
That was ok but let's try something more modern. And what better represents modernity than people believing that the earth is flat. I have scraped all the top level comments from the top 500 youtube videos matching the query "flat earth". Here is the comments scraper I made for this. And here is what the neural network spat out after ingesting 10MB worth of those comments:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 |
|
That doesn’t make any sense at all. It’s so much like real youtube comments it’s uncanny.
]]>In the previous posts we started with two datasets “left” and “right”. Using tokenization and the magic of spark we generated for every left record a small bunch of right records that maybe correspond to it. For example this record:
1 2 3 4 5 6 7 |
|
got these two as candidate matches:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 |
|
And now we need to decide which - if any - is(are) the correct one(s). Last time we dodged this problem by using a heuristic "the more keys were matched, the better the candidate". In this case the record with Id 'a' was matched on both name and phone number while 'c' was matched on postcode alone, therefore 'a' is the better match. It worked in our simple example but in general it's not very accurate or robust. Let's try to do better.
The obvious first step is to use some string comparison function to get a continuous measure of similarity for the names rather than the binary match - no match. Levenshtein distance will do, Jaro-Winkler is even better.
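For example, using the jellyfish library (where the function is called jaro_winkler in older releases and jaro_winkler_similarity in newer ones) - a sketch:

```python
import jellyfish

def name_similarity(name_a, name_b):
    # 1.0 for identical strings, degrades gracefully with typos and reorderings
    return jellyfish.jaro_winkler(name_a.lower(), name_b.lower())
```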
and likewise for the phone numbers, a sensible measure of similarity would be the length of the longest common substring:
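A possible implementation using difflib from the standard library - the digit cleanup and normalisation are my own choices:

```python
import re
from difflib import SequenceMatcher

def phone_similarity(phone_a, phone_b):
    """Length of the longest common run of digits, normalised by the
    length of the shorter number."""
    a = re.sub(r'\D', '', phone_a)
    b = re.sub(r'\D', '', phone_b)
    if not a or not b:
        return 0.0
    match = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return match.size / min(len(a), len(b))
```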
This makes sense at least if the likely source of phone number discrepancies is area codes or extensions. If we’re more worried about typos than different prefixes/suffixes then Levenshtein would be the way to go.
Next we need to come up with some measure of postcode similarity. E.g. full match = 1, partial match = 0.5 - for UK postcodes. And again the same for any characteristic that can be extracted from the records in both datasets.
With all those comparison functions in place, we can create a better scorer:
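Something along these lines - the field names and the hand-picked weights are placeholders, not the values from the original post:

```python
def postcode_similarity(pc_a, pc_b):
    a, b = pc_a.replace(' ', '').upper(), pc_b.replace(' ', '').upper()
    if a == b:
        return 1.0
    return 0.5 if a[:3] == b[:3] else 0.0     # partial credit for a partial match

def score_match(left_record, right_record):
    return (2.0 * name_similarity(left_record['name'], right_record['name'])
            + 1.5 * phone_similarity(left_record['phone'], right_record['phone'])
            + 1.0 * postcode_similarity(left_record['postcode'], right_record['postcode']))
```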
This should already work significantly better than our previous approach but it’s still an arbitrary heuristic. Let’s see if we can do better still.
Evaluation of matches is a type of classification. Every candidate match is either true or spurious and we use similarity scores to decide which is the case. This dictates a simple approach: hand-label a sample of candidate matches as true or spurious, use the similarity scores as features, train a classifier on the labelled sample and use its predicted probability as the match score.
It shouldn’t have been a surprise to me but it was when I discovered that this actually works and makes a big difference. Even with just 4 features matching accuracy went up from 80% to over 90% on a benchmark dataset just from switching from handpicked weights to weights fitted with logistic regression. Random forest did even better.
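In sklearn terms this is just a few lines - X_train, y_train and X_candidates are assumed names for the hand-labelled candidate pairs, their labels and the unlabelled candidates, each row holding the similarity scores as features:

```python
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
clf.fit(X_train, y_train)                  # 1 = true match, 0 = spurious

# the predicted probability becomes the new match score
match_scores = clf.predict_proba(X_candidates)[:, 1]
```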
One more improvement that can take accuracy to the next level is iterative learning. You train your model, apply it and see in what situations is the classifier least confident (probability ~50%). Then you pick some of those ambiguous examples, hand-label them and add to the training set, rinse and repeat. If everything goes right, now the classifier has learned to crack previously uncrackable cases.
This concludes my tutorial on data matching but there is one more tip that I want to share.
Levenshtein distance, Jaro-Winkler distance etc. are great measures of edit distance but not much else. If the variation in the string you're comparing is due to typos ("Bruce Wayne"
-> "Burce Wanye"
) then Levenshtein is the way to go. Frequently though the variation in names has nothing to do with typos at all, there are just multiple ways people refer to the same entity. If we’re talking about companies "Tesco"
is clearly "Tesco PLC"
and "Manchester United F.C."
is the same as "Manchester United"
. Even "Nadbor Consulting Company"
is very likely at least related to "Nadbor Limited"
given how unique the word "Nadbor"
is and how "Limited"
, "Company"
and "Consulting"
are super common to the point of meaninglessness. No edit distance would ever figure that out because it doesn’t know anything about the nature of the strings it receives or about their frequency in the dataset.
A much better distance measure in the case of company names should look at the words the two names have in common, rather than the characters. It should also discount the words according to their uniqueness. The word "Limited" occurs in a majority of company names so it's pretty much useless, "Consulting" is more important but still very common and "Nadbor" is completely unique. Let the code speak for itself:
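The original snippet isn't preserved, so here is a loose sketch of the idea - document frequencies computed over all names in the dataset, square-root weighting instead of the usual logarithm, and a cosine-style normalisation that is my own addition:

```python
from collections import Counter

# document frequency of every word across all company names (all_names assumed given)
df = Counter(word for name in all_names for word in set(name.lower().split()))
n_names = len(all_names)

def word_weight(word):
    # rare words count for a lot, ubiquitous ones for almost nothing
    return (n_names / df[word]) ** 0.5 if word in df else 1.0

def company_name_similarity(name_a, name_b):
    words_a, words_b = set(name_a.lower().split()), set(name_b.lower().split())
    common = sum(word_weight(w) for w in words_a & words_b)
    norm_a = sum(word_weight(w) for w in words_a)
    norm_b = sum(word_weight(w) for w in words_b)
    return common / ((norm_a * norm_b) ** 0.5 + 1e-9)
```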
The above can be interpreted as the scalar product of the names in the Bag of Word representation in the idf space except instead of the logarithm usually used in idf I used a square root because it gives more intuitively appealing scores. I have tested this and it works great on UK company names but I suspect it will do a good job at comparing many other types of sequences of tokens (not necessarily words).
]]>To match two datasets:
This data matching algorithm could easily be implemented in the traditional single-machine, single-threaded way using a collection of hashmaps. In fact this is what I have done on more than one occasion and it worked. The advantage of spark here is built-in scalability. If your datasets get ten times bigger, just invoke spark requesting ten times as many cores. If matching is taking too long - throw some more resources at it again. In the single-threaded model all you can do is up the RAM as your data grows but the computation is taking longer and longer and there is nothing you can do about it.
As an added bonus, I discovered that the abstractions Spark forces on you - maps, joins, reduces - are actually appropriate for this problem and encourage a better design than the naive implementation.
In the spirit of TDD, let's start by creating a test case. It will consist of two RDDs that we are going to match. Spark's dataframes would be an even more natural choice if not for the fact that they are completely fucked up.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 |
|
First step in the algorithm - tokenize the fields. After all this talk in the last post about fancy tokenizers, for our particular toy datasets we will use extremely simplistic ones:
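For instance - a sketch only, the tokenizers in the original post may differ in detail:

```python
def tokenize_name(name):
    return [word.lower() for word in name.split()]

def tokenize_phone(phone):
    digits = ''.join(c for c in phone if c.isdigit())
    return [digits[-7:]]                      # ignore country and area prefixes

def tokenize_postcode(postcode):
    return [postcode.replace(' ', '').upper()]
```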
Now we have to specify which tokenizer should be applied to which field. You don’t want to use the phone tokenizer on a person’s name or vice versa. Also, tokens extracted from name shouldn’t mix with tokens from address or phone number. On the other hand, there may be multiple fields that you want to extract e.g. phone numbers from - and these tokens should mix. Here’s minimalistic syntax for specifying these things:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 |
|
And here’s how they are applied:
1 2 3 4 5 6 7 8 9 10 11 |
|
The result is a mapping of token -> Id in the form of an RDD. One for each dataset:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 |
|
Now comes the time to generate candidate matches. We do that by joining records that have a token in common:
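In pyspark terms this is a join on the token followed by a regrouping per candidate pair - roughly like this, with the RDD names assumed:

```python
# left_tokens, right_tokens: RDDs of (token, record_id) pairs
candidates = (left_tokens
              .join(right_tokens)                             # (token, (left_id, right_id))
              .map(lambda kv: ((kv[1][0], kv[1][1]), {kv[0]}))
              .reduceByKey(lambda a, b: a | b))               # pair -> set of tokens it was joined on
```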
Result:
1 2 3 4 5 |
|
With every match we have retained the information about what it was joined on for later use. We have 4 candidate matches here - 2 correct and 2 wrong ones. The spurious matches are (1, 'c') - Bruce Wayne and Alfred Pennyworth matched due to shared address; and (2, 'a') - Bruce Wayne and Thomas Wayne matched because of the shared last name.
Joining the original records back to the matches, so they can be compared:
1 2 3 4 5 6 7 8 9 |
|
We’re almost there. Now we need to define a function to evaluate goodness of a match. Take a pair of records and say how similar they are. We will cop out of this by just using the join keys that were retained with every match. The more different types of tokens were matched, the better:
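Which, for a candidate match carrying the set of keys it was joined on, is as simple as (sketch):

```python
def score_match(match):
    # match = ((left_id, right_id), set_of_join_keys)
    return len(match[1])
```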
We also need a function that will say: a match must be scored at least this high to qualify.
1 2 |
|
And now, finally we use those functions to evaluate and filter candidate matches and return the matched dataset:
1 2 3 4 5 6 |
|
The result:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 |
|
Glorious.
Now is the time to put “generic” back in the “generic data matching pipeline in spark”.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 |
|
To use it, you have to inherit from DataMatcher and override at a minimum the get_left_tokenizers and get_right_tokenizers functions. You will probably want to override score_match and is_good_enough_match as well, but the default should work in simple cases.
Now we can match our toy datasets in a few lines of code, like this:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 |
|
Short and sweet.
There are some optimisations that can be done to improve speed of the pipeline, I omitted them here for clarity. More importantly, in any nontrivial usecase you will want to use a more sophisticated evaluation function than the default one. This will be the subject of the next post.
]]>You have a dataset of - let's say - names and addresses of some group of people. You want to enrich it with information scraped from e.g. linkedin or wikipedia. How do you figure out which scraped profiles match which records in your database?
Or you have two datasets of sales leads maintained by different teams and you need to merge them. You know that there is some overlap between the records but there is no unique identifier that you can join the datasets on. You have to look at the records themselves to decide if they match.
Or you have a dataset that over the years has collected some duplicate records. How do you dedup it, given that the data format is not consistent and there may be errors in the records?
All of the above are specific cases of a problem that can be described as: Finding all the pairs (groups) of records in a dataset(s) that correspond to the same real-world entity.
This is what I will mean by data matching in this post.
This type of task is very common in data science, and I have done it (badly) many times before finally coming up with a generic, clean and scalable solution. There are already many commercial solutions to specific instances of this problem out there and I know of at least two startups whose main product is data matching. Nevertheless for many usecases a DIY data matching pipeline should be just as good and may be easier to build than integration with an external service or application.
The general problem of data matching will be easier to discuss with a specific example in mind. Here goes:
You work at a company selling insurance to comic book characters. You have a database of 50,000 potential clients (these are the first 3):
and you just acquired 400,000 new leads:
You need to reconcile the two tables - find which (if any) records from the second table correspond to records in the first. Unfortunately data in both tables is formatted differently and there are some typos in both (“Burce”, “Ogtham”). Nevertheless it is easy for a human being to figure out who is who just by eyeballing the two datasets. One record from the first table and one from the second clearly refer to the same person - Bruce Wayne. Another pair matches Bruce Banner. The remaining records - Thomas Wayne and Alfred Pennyworth - don’t have any matches.
Now, how to do the same automatically and at scale? Comparing every record from one table to every one in the other - $5 \times 10^4 \times 4 \times 10^5 = 2 \times 10^{10}$ comparisons - is out of the question.
Like any red-blooded programmer, when I see a list of things to be matched to other things, I immediately think: hashmaps. My internal dialogue goes something like this:
This is beginning to sound unwieldy, but it’s basically the correct approach and - I strongly suspect - the only workable approach. At least as long as we’re not taking the hashmaps too literally. But let’s reformulate it in slightly more abstract terms before diving into implementation.
Data matching must consist of two stages:
1. Generate candidate matches - pairs of records that could plausibly refer to the same entity.
2. Evaluate every candidate pair and decide whether it really is a match.
In this post I will assume that we have a good way of evaluating candidate matches (step 2) and concentrate only on step 1. In fact 2 is crucial and usually harder than 1 but it’s very case-specific. More on that topic next time.
When is a pair of records a good candidate for a match? When the records have something in common. What? For example one of the fields - like phone number or name or address. That would definitely suffice but it’s too restrictive. Consider Bruce Wayne from our example. In the first table:
And in the second table:
Not a single field in common between these two records and yet they clearly represent the same person.
It is tempting to try to normalise phone numbers by stripping area extensions, fix misspelled names, normalise the order of first-, middle- and last names, etc. And sometimes this may be the way to go. But in general it’s ambiguous and lossy. There will never be a normalisation function that does the right thing for every possible version of the name:
What we can do instead is extract multiple tokens (think: multiple hashes) from the name. A pair of records will be considered a candidate match if they have at least one token in common.
We can for example just split the name on whitespaces:
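Something along these lines (a sketch; the function name is my own):

```python
def tokenize_words(name):
    return name.lower().split()

tokenize_words("Bruce Thomas Wayne")  # -> ['bruce', 'thomas', 'wayne']
```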
and have this record be matched with every “Bruce”, every “Thomas” and every “Wayne” in the dataset. This may or may not be a good idea depending on how many “Bruces” there are in this specific dataset. But tokens don’t have to be words. We can try bigrams:
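One reading of this is word bigrams - pairs of adjacent words, which are much more specific than single words. A sketch:

```python
def tokenize_bigrams(name):
    words = name.lower().split()
    return [' '.join(pair) for pair in zip(words, words[1:])]

tokenize_bigrams("Bruce Thomas Wayne")  # -> ['bruce thomas', 'thomas wayne']
```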
Or we can try forming initials:
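For example, combining the initial of the first word with the last word - a sketch:

```python
def tokenize_initials(name):
    words = name.lower().split()
    if len(words) < 2:
        return []
    # "Bruce Wayne" and "Burce T. Wayne" both end up with 'b. wayne'
    return ['%s. %s' % (words[0][0], words[-1])]
```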
If we did that, then for instance both “Bruce Wayne” and “Burce T. Wayne” would get “B. Wayne” as one of the tokens and would end up matched as a result.
If we tokenize by removing vowels, that would go a long way toward fixing typos and alternative spellings while introducing few false positives:
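A sketch of a vowel-stripping tokenizer:

```python
def tokenize_remove_vowels(name):
    return [''.join(c for c in word if c not in 'aeiou')
            for word in name.lower().split()]

tokenize_remove_vowels("Bruce Wayne")  # -> ['brc', 'wyn']
tokenize_remove_vowels("Burce Wayne")  # -> ['brc', 'wyn'] - the typo no longer matters
```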
There are also algorithms like Soundex that tokenize based on how a word sounds regardless of how it’s spelled. Soundex gives the same result for “Bruce” and “Broose” and “Bruise” or for “John” and “Jon”. Very useful given that a lot of data entry is done by marketers who talk to people (and ask their names) over the phone.
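For instance, using the soundex implementation from the jellyfish library (one of several Python options):

```python
import jellyfish

# words that sound alike get the same soundex code
assert jellyfish.soundex('Bruce') == jellyfish.soundex('Broose') == jellyfish.soundex('Bruise')
assert jellyfish.soundex('John') == jellyfish.soundex('Jon')
```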
Finally, nothing stops us from using all of the above at the same time:
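Combining the little tokenizers sketched above into one:

```python
def tokenize_name(name):
    # soundex codes from the previous snippet could be appended here as well
    return (tokenize_words(name) + tokenize_bigrams(name)
            + tokenize_initials(name) + tokenize_remove_vowels(name))
```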
With this, all the different ways of spelling “Bruce Wayne” should get at least one token in common and, if we’re lucky, few other names will.
This was an overview of name tokenization. Other types of data will require their own tokenization rules. The choice of tokenization logic necessarily depends on the specific data type and dataset, but some general principles apply.
One not name-related example: phone numbers. Since people enter phone numbers in databases in one thousand different formats, with all kinds of rubbish area codes and extensions, you shouldn’t count on raw phone numbers matching perfectly. An example of a sensible tokenizer is one that first strips all non-digit characters from the phone number, then returns the last 8 digits.
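A sketch of that tokenizer:

```python
def tokenize_phone(phone):
    digits = ''.join(c for c in phone if c.isdigit())
    return [digits[-8:]]
```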
Or to guard against people putting extensions at the end of phone numbers, we can extract every consecutive 8 digits:
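Roughly like this:

```python
def tokenize_phone_all(phone):
    digits = ''.join(c for c in phone if c.isdigit())
    # every run of 8 consecutive digits becomes a token
    return [digits[i:i + 8] for i in range(len(digits) - 7)]
```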
This should catch any reasonable way of writing the number while still having very low likelihood of a collision.
To match two datasets:
1. Apply tokenizers to the records on both sides.
2. Join the datasets on the tokens to generate candidate matches.
3. Score every candidate pair.
4. Keep the pairs whose score clears the threshold.
Coming up: how to implement this in spark.
Here’s a list of real examples of irritating recruiter behaviors together with guidelines on how I expect a reasonable person to act instead.
Before we start you must understand where I’m coming from. I’m not constantly looking for new jobs. But when I do, I apply en masse and then I have to deal with many recruiters at once. Each one constantly calling me in the office or at home (what is it with your obsession with phone calls? Why not email?). This in itself can get annoying but on top of that there are other more serious offenses. I am usually extremely agreeable both in person and over the phone. A blog post is my way of expressing complaints that I wouldn’t dare make in a conversation with you. It’s also a form of catharsis so forgive me if I get somewhat snarky and don’t take offense if you personally are not guilty of the sins mentioned here.
I have an exciting opportunity with a global company, one of the leaders in its field. When can we arrange a call?
This tells me nothing. Everyone is either a “leader in their field” or an “exciting startup” or a “global brand”. I’m getting 5 of these a day. If I am to commit any time to it I first need to know:
Ideally, if possible, also
This information is typically enough for me to decide if I’m interested or not. I will let you know if I am. I really don’t need to listen to you talk about their nice offices and what fancy university has the CTO attended. If I’m interested in these things I will look them up myself. This conversation is not going to change my mind one way or the other.
If you want to chat with me because you need to vet me before passing my CV on to the company please say so and indicate clearly what your decision is afterwards. And don’t do this bait-and-switch on me where I express interest in one position and you call me to discuss but only try to peddle another.
In short, this is how it’s going to work:
If you absolutely need to hear my voice and vet me, say so. If I’m interested in the role, we can get it over with once.
Let me know what is the best time to call you so that I can talk you through the interview process
Why? It’s a job interview, not open heart surgery. There will be a 20-minute phone call with HR, then an hour with some technical person, then 3 hours of technical on-site interviews, then a brief chat with some higher-up. Or a homework assignment, then a technical phone call, then on-site, then HR. Or some other configuration. Whatever the case, you could’ve explained it in the email and saved us both time. Frankly, I don’t care what the format is. All I need from you is the time and place and it is definitely possible to send those by email.
I will call you before the interview to give you a heads up
No.
Are you interviewing with any other companies? Tell me, so we can adjust the schedule so that you don’t miss this opportunity
What you mean is “we can adjust the schedule so that you don’t get the chance to interview with anyone else” (more on that later). Thank you very much. I don’t want to lie to you, so I’m just not going to tell you anything. If I actually need to speed things up because of other interviews, I will let you know. By email.
How it is going to work:
Please call me after the interview to tell me how did it go …
It went well. Probably. Or maybe not. Either way it doesn’t really matter how I think it went, does it?
… what was it like …
It was like every other interview I’ve been to. First they introduced themselves and the company, then we talked about my resume, then about my motivation and finally they gave me some vaguely job-related problems to solve and I solved them. What else did you expect?
… how did you like them
I liked them fine. Or maybe I loved them. Or maybe I thought they were boring. You won’t find out by calling though, because I’m only gonna feed you some enthusiastic sounding platitudes because I want to get to the next stage in this process. There is no upside for me in telling you that I thought the interviewers were boring and dumb even if it was true. So just don’t ask. In the unlikely event that I hated the interview enough to make me want to withdraw my application, I promise to let you know. By - you guessed it - email.
How it’s going to work:
So I made it through all the rounds and I’m expecting to hear back from the company. You call me saying you have some positive initial feedback, then we have this conversation:
You: If they come back with an offer of £n are you going to accept?
Me: I don’t know. I’ll need a couple of days to think about it.
You: Why wouldn’t you accept it? What is wrong with the offer?
Me: Nothing is wrong with it. I just need some time to consider my options. Not very long, just the weekend.
You: If there is nothing wrong with the offer then you should take it. I need to get back to them and tell them that you are going to take the offer. Otherwise they will think that you are not really motivated to work with them and they will not make the offer.
Me: What are you talking about?! They know I’m motivated. But this is a serious decision and I will not make it without proper consideration.
You: They have deadlines, you know. They can’t wait forever.
Me: It’s only two days!
You: If I don’t tell them you will accept, they will keep interviewing and they can find someone else. This is a very buoyant market!
…
This is bullshit and you know it. This is not a heist movie and you’re not looking for a last minute replacement safe cracker. The job ad has been out for months. I think they can wait two more days.
One time this happened to me right after the interview. The recruiter didn’t even wait to hear from the company in question. He based this whole routine on my reporting that the interview went well and that we discussed money at the end. This was also a case of him putting me up for a job that paid 10% less than the one I held at the time even though I explicitly required that my new position pays 10% more. I only found out during the final interview.
Another recruiter, upon learning that I’m interviewing with other companies, got my final interview cancelled. My first interview was on Monday and the final one was supposed to take place a week later. On Wednesday he asked me about other interviews, then we had the bullshit “you’ve got to accept” conversation. When I didn’t budge, the recruiter got the company to make me an offer that evening, skipping the final interview. The offer came with an expiration date of 12am the following day! 4 days before the planned interview. He even blurted out that the insane timing was because they were afraid they would lose me, before covering up with the deadlines and buoyant markets bs. That they have to resort to this type of tactics is all the proof I need that the market for data scientists isn’t buoyant at all. Obviously they were afraid I would accept another offer. And the only reason they were afraid was that the recruiter tipped them off.
I understand now that we’re not allies. We are not exactly opponents but you’re definitely playing for a different team.
So I am not going to make it easy for your team. I will not get caught up in the false sense of urgency you’re trying to create (one recruiter tried to get me to come to the office and sign the contract on a Sunday) … ever again. I will not fall for scare tactics. I will not reveal any information about other interviews I may be having. Here’s what will happen:
It’s worth noting that I have never experienced anything resembling the “bullshit conversation” when I dealt with a potential employer directly. When applying through recruiters it happened every time.
If these ground rules seem unreasonable to you, then save us both the trouble and don’t contact me.
Yours Exasperatedly
nadbor
I didn’t intend to make it read like I’m bragging about all the offers I’m getting like I’m God’s gift to data science who can get any job he wants. It’s just that I’ve interviewed a lot over the years so in addition to dozens of rejections I have had several successes too. I’m only describing here the more successful examples because in case of rejection recruiters tend to mercifully leave me alone.
There are 4 million active companies in the UK and Ireland. DueDil collects all kinds of information about them - financials, legal structures, contact info, company websites, blogs, news appearances etc. All of it is presented to our users and some of it also serves as input to machine learning tasks - like classifying companies into industries.
One very interesting dataset that remains underutilised (AFAIK by anyone, not just DueDil) is the network of connections of companies and directors.
You can tell a lot about a company just by looking at its directors. That is - if you know anything about these people. At DueDil we don’t know much more than just their identities. This would be rather useless in the context of a single company. But there are millions of companies, and people who serve as their directors more often than not do it many times in their careers at different companies. Knowing that the director’s name is Jane Brown may be useless, but knowing that the director previously held similar positions at three different tech startups is highly relevant. And this is just one director out of many and one type of relationship.
More generally, one can think about companies as nodes in a graph. Two companies are connected iff there is a person who has served as a director at both of them (not necessarily at the same time). I will call this the company graph. Here’s a part of the graph containing DueDil.
DueDil is connected to Founders For Good Ltd because our CEO Damian Kimmelman is also a director at the other company.
It is intuitive that the position of a company in this graph tells us something about the company. It is however difficult to do anything with this information unless it is somehow encoded into numbers.
This is where word embeddings come in. As I mentioned previously, it is possible to apply Word2Vec to a graph to get an embedding of graph nodes as real-valued vectors in a procedure called DeepWalk. The idea is very simple:
1. Generate a large number of random walks on the graph.
2. Treat every walk as a sentence whose words are the node (company) Id’s.
3. Train Word2Vec on this corpus of sentences.
A random walk is just a sequence of nodes, where the next node is always one of the neighbours of the previous node, chosen at random. Think: Duedil -> Founders For Good Ltd -> Omio Limited.
Word2Vec accepts a collection of documents - where every document is a list of tokens (strings). Here company Id’s play the role of tokens and random walks play the role of documents. It all checks out.
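A sketch of the whole procedure, assuming the graph is available as a plain dict mapping a company id to the ids of its neighbours (the gensim parameter is vector_size in gensim 4+, size in older versions):

```python
import random
from gensim.models import Word2Vec

def random_walk(graph, start, length=40):
    # graph: {company id: [ids of connected companies]}
    walk = [start]
    for _ in range(length - 1):
        neighbours = graph.get(walk[-1])
        if not neighbours:
            break
        walk.append(random.choice(neighbours))
    return walk

# 10 walks of length 40 per company - the numbers quoted below
sentences = [[str(node) for node in random_walk(graph, company)]
             for company in graph for _ in range(10)]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=4)
company_vectors = {company: model.wv[str(company)] for company in graph}
```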
To limit the size of the graph for this proof of concept, I have applied this procedure only to a subset of about 2.2 million companies.
I generated 10 random walks starting at every company, each of length 40. Training Word2Vec with gensim on this corpus of $10 \times 40 \times 2200000 = 8.8 \times 10^8$ tokens took over 11h. It also took a machine with 40GB of RAM before it stopped crashing, even though the random walks were generated on-line.
Finally I got some vectors out of it - one per company. These vectors themselves were the goal of this project (they can serve as features in ML), but I also made some plots to verify that the algorithm is working as advertised.
The embedding produced by DeepWalk was 100-dimensional in this case, so I had to do some dimensionality reduction before trying to visualize the vectors. t-SNE is perfect for this kind of thing. Here’s a sample of 40000 company vectors embedded in 2D with t-SNE. You can move or zoom in the plot or hover over the dots to see the names of the corresponding companies.
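Roughly like this, with company_vectors being the dict from the sketch above:

```python
import random
import numpy as np
from sklearn.manifold import TSNE

# take a sample of 40000 companies and squash their 100-d vectors down to 2-d
sample = random.sample(list(company_vectors), 40000)
vectors = np.array([company_vectors[company] for company in sample])
coords_2d = TSNE(n_components=2).fit_transform(vectors)  # one (x, y) pair per company
```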
It worked! You can immediately see that there is some structure in the data and t-SNE has picked up on it (and this is only a tiny sample - 2% of all the datapoints). What does this structure mean? After the graph has been transformed with DeepWalk and then t-SNE, the position of a company in this plot doesn’t have a simple interpretation but it’s clear that groups of highly interconnected companies will correspond to clusters of points in this plot. And it’s easy to verify just by looking at the names of the companies that this is the case.
Take the big blob in the upper left corner - the companies there:
We have discovered the cluster of Irish companies! And if you zoom in on the long, narrow appendage sticking out of this cluster towards bottom left - you’ll see companies like:
… and hundreds more. This is not even cherry-picked. I hereby declare the discovery of the Irish Aviation Peninsula.
Slightly up and to the right of center there is a smaller Scottish cluster recognizable through such companies as
There are many other smaller clusters and it’s actually a fun exercise to try to pinpoint exactly what the companies in a cluster have in common.
This was fun if somewhat grim looking. Let’s try to add some color to the plot. The original goal of this project was to get graph-derived features for industry classification. Let’s try using different colors to denote different industries (based on SIC codes). If DeepWalk coordinates are predictive of the industry a company is in, we should expect to see same-colored dots (companies in the same industry) clustering together in the plot. Does this actually happen?
A little bit, yes.
Mostly everything is a big reddish mess (“services” is the most popular category). But there are indeed some clusters. Right of center we can see a medium sized pink blob of insurance companies:
Below it and to the left lies another, this one green:
Clearly this is a cluster of film companies (plus other media). If you look more closely you will discover that this is actually the cluster of London based film companies. Nearby there is a smaller green cluster of media companies from the rest of England and another one for Wales. These are less clearly delimited and partly obscured by the red dots of “Services” companies. There are many others, but they are sometimes so tight, they appear as a single dot in the plot.
This is more noisy than I hoped for but it’s definitely working. It would definitely improve the accuracy of industry classification if used together with other, stronger features. Plus you can learn interesting things from it just by looking at the plot. Like the fact that film production companies are closely connected to each other and relatively unconnected to the rest of the world. Or that London is a different country as far as media companies are concerned.
Having all this t-SNE and Bokeh niceness in place I couldn’t resist applying it to another interesting dataset - keywords. Keywords are a set of industry related tags that DueDil has for millions of companies. They are things like “fishing” or “management consulting” or “b2b”. A company usually has between a few and a few dozen of them.
A byproduct of the pipeline that extracts keywords for companies is a Word2Vec embedding of the keywords. I used this embedding to create an embedding of companies. This was done simply by averaging all the vectors corresponding to a company’s keywords. I ran the resulting vectors through t-SNE and here’s what it looks like:
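In code, the averaging step is as simple as this (the two input dicts are placeholders for DueDil-internal data I don’t have):

```python
import numpy as np

# company_keywords: {company id: list of keywords}, keyword_vectors: {keyword: numpy vector}
company_vecs = {
    company: np.mean([keyword_vectors[kw] for kw in keywords], axis=0)
    for company, keywords in company_keywords.items() if keywords}
```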
I shouldn’t be surprised that keywords - which were picked to be industry related - predict the industry really well. But I was blown away by the level of detail preserved by t-SNE. There are recognizable islands for everything. There is a golden Farmers Archipelago and a narrow blue Dentist Island south from Home Care Island. There is a separate Asian Island in the Restaurant Archipelago - go see for yourself.
This was fun. Long live Word2Vec and t-SNE!
Dataframes in pyspark are simultaneously pretty great and kind of completely broken.
On the other hand: after calling df.cache(), dataframes sometimes start throwing key not found errors and the Spark driver dies. Other times the task succeeds but the underlying rdd becomes corrupted (field values switched up).
But the biggest problem is actually transforming the data. It works perfectly on those contrived examples from the tutorials. But I’m not working with flat SQL-table-like datasets. Or if I am, they are already in some SQL database. When I’m using Spark, I’m using it to work with messy multilayered json-like objects. If I had to create a UDF and type out a ginormous schema for every transformation I want to perform on the dataset, I’d be doing nothing else all day, I’m not even joking. UDFs in pyspark are clunky at the best of times but in my typical usecase they are unusable. Take this relatively tiny record, for instance:
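For illustration, a hypothetical record of that json-like, nested kind (made up by me, not the post’s original example):

```python
record = {
    'id': 123,
    'name': 'Acme Ltd',
    'is_active': True,
    'address': {'street': '1 High Street', 'city': 'London', 'verified': False},
    'phone_numbers': ['0123456789', '0987654321'],
}
```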
the correct schema for this is created like this:
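For the hypothetical record above, that means typing out something like:

```python
from pyspark.sql.types import (StructType, StructField, StringType,
                               LongType, BooleanType, ArrayType)

schema = StructType([
    StructField('id', LongType()),
    StructField('name', StringType()),
    StructField('is_active', BooleanType()),
    StructField('address', StructType([
        StructField('street', StringType()),
        StructField('city', StringType()),
        StructField('verified', BooleanType()),
    ])),
    StructField('phone_numbers', ArrayType(StringType())),
])
```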
And this is what I would have to type every time I need a udf to return such a record - which can be many times in a single spark job.
For these reasons (+ legacy json job outputs from hadoop days) I find myself switching back and forth between dataframes and rdds. Read some JSON dataset into an rdd, transform it, join with another, transform some more, convert into a dataframe and save as parquet. Or read some parquet files into a dataframe, convert to rdd, do stuff to it, convert back to dataframe and save as parquet again. This workflow is not so bad - I get the best of both worlds by using rdds and dataframes only for the things they’re good at. How do you go from a dataframe to an rdd of dictionaries? This part is easy:
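Something like this one-liner:

```python
rdd_of_dicts = df.rdd.map(lambda row: row.asDict(recursive=True))
```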
It’s the other direction that is problematic. You would think that rdd’s method toDF() would do the job but no, it’s broken. Called on an rdd of dicts like the one above, it actually returns a dataframe with a mangled schema (as df.printSchema() reveals): it interpreted the inner dictionary as a map of boolean instead of a struct and silently dropped all the fields in it that are not booleans. But this method is deprecated now anyway. The preferred, official way of creating a dataframe is with an rdd of Row objects. So let’s do that.
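A sketch of that approach (sqlContext being a SQLContext or SparkSession):

```python
from pyspark.sql import Row

rows = rdd_of_dicts.map(lambda d: Row(**d))
df = sqlContext.createDataFrame(rows)
df.printSchema()
```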
prints the same schema as the previous method.
In addition to this, both these methods will fail completely when some field’s type cannot be determined because all the values happen to be null in some run of the job.
Also, quite bizarrely in my opinion, the order of columns in a dataframe is significant while the order of keys is not. So if you have a pre-existing schema and you try to contort an rdd of dicts into that schema, you’re gonna have a bad time.
Without further ado, this is how I now create my dataframes:
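A sketch of the kind of call described, with hypothetical names (df_from_rdd is defined below, and the prototype reuses the example record from earlier):

```python
prototype = {
    'id': 123,
    'name': 'Acme Ltd',
    'is_active': True,
    'address': {'street': '1 High Street', 'city': 'London', 'verified': False},
    'phone_numbers': ['0123456789', '0987654321'],
}
df = df_from_rdd(rdd_of_dicts, prototype, sqlContext)
```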
This doesn’t randomly break, doesn’t drop fields and has the right schema. And I didn’t have to type any of this StructType([StructField(... nonsense - just a plain python literal that I got by printing one record from the rdd.
As an added bonus now this prototype is prominently displayed at the top of my job file and I can tell what the output of the job looks like without having to decode parquet files. Self documenting code FTW!
And here’s how it’s done. First we need to implement our own schema inference - the way it should work:
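A minimal sketch of such schema inference - not the post’s original code, and handling only the basic types:

```python
from pyspark.sql.types import (StructType, StructField, StringType, LongType,
                               DoubleType, BooleanType, ArrayType)

def infer_schema(example):
    """Infer a spark DataType from a plain python value."""
    if isinstance(example, bool):          # bool first - it is a subclass of int
        return BooleanType()
    elif isinstance(example, int):
        return LongType()
    elif isinstance(example, float):
        return DoubleType()
    elif isinstance(example, str):
        return StringType()
    elif isinstance(example, dict):
        # a dict becomes a struct, not a map - unlike spark's own inference
        return StructType([StructField(key, infer_schema(value))
                           for key, value in sorted(example.items())])
    elif isinstance(example, (list, tuple)):
        return ArrayType(infer_schema(example[0]))
    else:
        raise ValueError('unsupported type: %s' % type(example))
```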
Using this we can now specify the schema using a regular python object - no more java-esque abominations. But this is not all. We will also need a function that transforms a python dict into a Row object with the correct schema. You would think that this should be automatic as long as the dict has all the right fields, but no - order of fields in a Row is significant, so we have to do it ourselves.
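Again, a sketch of the idea rather than the original implementation:

```python
from pyspark.sql import Row
from pyspark.sql.types import StructType, ArrayType

def dict_to_row(schema, record):
    """Recursively rearrange a plain python value to match the field order of the schema."""
    if record is None:
        return None
    if isinstance(schema, StructType):
        return Row(*[dict_to_row(field.dataType, record.get(field.name))
                     for field in schema.fields])
    if isinstance(schema, ArrayType):
        return [dict_to_row(schema.elementType, item) for item in record]
    return record
```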
And finally:
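Tying the two helpers together - a sketch:

```python
def df_from_rdd(rdd, prototype, sql_context):
    # prototype: a plain python dict that looks like one record of the rdd
    schema = infer_schema(prototype)
    rows = rdd.map(lambda record: dict_to_row(schema, record))
    return sql_context.createDataFrame(rows, schema)
```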
The basic idea is that semantic vectors (such as the ones provided by Word2Vec) should preserve most of the relevant information about a text while having relatively low dimensionality which allows better machine learning treatment than straight one-hot encoding of words. Another advantage of topic models is that they are unsupervised so they can help when labeled data is scarce. Say you only have one thousand manually classified blog posts but a million unlabeled ones. A high quality topic model can be trained on the full set of one million. If you can use topic modeling-derived features in your classification, you will be benefitting from your entire collection of texts, not just the labeled ones.
Ok, word embeddings are awesome, how do we use them? Before we do anything we need to get the vectors. We can download one of the great pre-trained models from GloVe:
and load them up in python:
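For example, assuming the 100-dimensional vectors from the glove.6B archive were unpacked to glove.6B.100d.txt:

```python
import numpy as np

w2v = {}
with open('glove.6B.100d.txt', encoding='utf8') as f:
    for line in f:
        parts = line.rstrip().split(' ')
        w2v[parts[0]] = np.array(parts[1:], dtype='float32')
```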
or we can train a Word2Vec model from scratch with gensim:
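Roughly like this, assuming X is the list of tokenized texts (the parameter is vector_size in gensim 4+, size in older versions):

```python
from gensim.models import Word2Vec

model = Word2Vec(X, vector_size=100, window=5, min_count=5, workers=4)
w2v = {word: model.wv[word] for word in model.wv.index_to_key}
```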
We got ourselves a dictionary mapping word -> 100-dimensional vector. Now we can use it to build features. The simplest way to do that is by averaging word vectors for all words in a text. We will build a sklearn-compatible transformer that is initialised with a word -> vector dictionary.
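A sketch of such a transformer:

```python
import numpy as np

class MeanEmbeddingVectorizer(object):
    def __init__(self, word2vec):
        self.word2vec = word2vec
        self.dim = len(next(iter(word2vec.values())))

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # X is a list of tokenized texts; words without a vector are simply skipped
        return np.array([
            np.mean([self.word2vec[w] for w in words if w in self.word2vec]
                    or [np.zeros(self.dim)], axis=0)
            for words in X])
```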
Let’s throw in a version that uses the tf-idf weighting scheme, for good measure:
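A sketch along the same lines, with words weighted by their idf (words never seen during fit get the maximum observed idf, i.e. they are treated as maximally rare):

```python
from collections import defaultdict

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

class TfidfEmbeddingVectorizer(object):
    def __init__(self, word2vec):
        self.word2vec = word2vec
        self.dim = len(next(iter(word2vec.values())))
        self.word2weight = None

    def fit(self, X, y=None):
        tfidf = TfidfVectorizer(analyzer=lambda x: x)  # texts are already tokenized
        tfidf.fit(X)
        max_idf = max(tfidf.idf_)
        self.word2weight = defaultdict(
            lambda: max_idf,
            [(w, tfidf.idf_[i]) for w, i in tfidf.vocabulary_.items()])
        return self

    def transform(self, X):
        return np.array([
            np.mean([self.word2vec[w] * self.word2weight[w]
                     for w in words if w in self.word2vec]
                    or [np.zeros(self.dim)], axis=0)
            for words in X])
```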
These vectorizers can now be used almost the same way as CountVectorizer or TfidfVectorizer from sklearn.feature_extraction.text. Almost - because sklearn vectorizers can also do their own tokenization - a feature which we won’t be using anyway because the benchmarks we will be using come already tokenized. In a real application I wouldn’t trust sklearn with tokenization anyway - rather let spaCy do it.
Now we are ready to define the actual models that will take tokenised text, vectorize and learn to classify the vectors with something fancy like Extra Trees. sklearn’s Pipeline is perfect for this:
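For example (the model names are mine; w2v is the word -> vector dictionary from above):

```python
from sklearn.pipeline import Pipeline
from sklearn.ensemble import ExtraTreesClassifier

etree_w2v = Pipeline([
    ('word2vec vectorizer', MeanEmbeddingVectorizer(w2v)),
    ('extra trees', ExtraTreesClassifier(n_estimators=200))])

etree_w2v_tfidf = Pipeline([
    ('word2vec vectorizer', TfidfEmbeddingVectorizer(w2v)),
    ('extra trees', ExtraTreesClassifier(n_estimators=200))])
```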
I benchmarked the models on everyone’s favorite Reuters-21578 datasets. Extra Trees-based word-embedding-utilising models competed against text classification classics - Naive Bayes and SVM. Full list of contestants:
Each of these came in two varieties - regular and tf-idf weighted.
The results (on 5-fold cv on the R8 dataset of 7674 texts labeled with 8 categories):
SVM wins, word2vec-based Extra Trees is a close second, Naive Bayes not far behind. Interestingly, an embedding trained on this relatively tiny dataset does significantly better than pretrained GloVe - which is otherwise fantastic. Can we do better? Let’s check how the models compare depending on the number of labeled training examples. Due to its semi-supervised nature w2v should shine when there is little labeled data.
That indeed seems to be the case. w2v_tfidf’s performance degrades most gracefully of the bunch. SVM takes the biggest hit when examples are few. Let’s try the other two benchmarks from Reuters-21578. 52-way classification:
Qualitatively similar results.
And 20-way classification:
This time pretrained embeddings do better than Word2Vec and Naive Bayes does really well, otherwise same as before.
At this point I have to note that averaging vectors is only the easiest way of leveraging word embeddings in classification but not the only one. You could also try embedding whole documents directly with Doc2Vec. Or use Multinomial Gaussian Naive Bayes on word vectors. I have tried the latter approach but it was too slow to include in the benchmark.
Update 2017: actually, the best way to utilise the pretrained embeddings would probably be this https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html I shall add this approach to the benchmark when I have some time.
Overall, we won’t be throwing away our SVMs any time soon in favor of word2vec but it has its place in text classification. Like when you have a tiny training set or to ensemble it with other models to gain an edge in Kaggle.
Plus, can SVM do this:
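The snippet in question was presumably the classic word-analogy trick; with a trained gensim model it might look like this:

```python
# king - man + woman = ?
print(model.wv.most_similar(positive=['woman', 'king'], negative=['man'], topn=1))
# with a well-trained embedding, the top hit is 'queen'
```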