21 Aug 2020 •
2 min. read
Hey, what do you mean by “Docs-as-Code”?
The concept of “Docs-as-Code” means treating documentation the way software engineers treat code:
- Write code,
- Build an executable,
- Test it, and then publish the deliverable.
In technical writing terms, it can look something like:
- Store your content source in a version control system like GitHub (typically in a format like Markdown),
- Use a static site generator like Middleman, Gatsby, Hugo, Jekyll, VuePress, MkDocs, etc., to produce a documentation site,
- Run validation checks (like broken-link detection), and then publish the site to your hosting provider.
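To make that validation step concrete, here is a minimal sketch of the kind of check such a pipeline might run: a Python script that flags relative links in Markdown files pointing at files that don't exist. The `docs` directory name, the file layout, and the regex are my own assumptions, not a prescribed tool.

```python
#!/usr/bin/env python3
"""Docs-as-Code validation sketch: flag broken relative links in Markdown.
The "docs" directory name is a hypothetical example."""
import re
import sys
from pathlib import Path

# Captures the URL part of a Markdown link: [text](url)
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)")

def broken_links(docs_dir: Path) -> list[tuple[Path, str]]:
    broken = []
    for md_file in docs_dir.rglob("*.md"):
        for match in LINK_RE.finditer(md_file.read_text(encoding="utf-8")):
            target = match.group(1).strip()
            if target.startswith(("http://", "https://", "mailto:")):
                continue  # external links would need an HTTP check instead
            if not (md_file.parent / target).exists():
                broken.append((md_file, target))
    return broken

if __name__ == "__main__":
    problems = broken_links(Path("docs"))
    for source, target in problems:
        print(f"{source}: broken link -> {target}")
    sys.exit(1 if problems else 0)  # a non-zero exit fails the CI build
```

In a real pipeline a script like this would run on every commit, before the static site generator publishes anything.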
Should I treat documentation the same as my source files?
Source code and documentation files (even if written in MD) are not the same.
A source code file is in plain text. A compiler reads the file and converts it into a machine-readable format (like an executable file).
A documentation file, on the other hand, requires extra elements, such as:
- Links to images (where will they be hosted?),
- Agreement on who is going to upload what,
- Rich styling like tables, tabs, source-code viewers, etc.
For source code files, compilers are pretty mature and stable. If there are syntax errors (as opposed to functional errors), the compiler will catch them immediately.
Converting Markdown to HTML (using a static site generator's parser) is more prone to errors: there is no single defined syntax for formats like Markdown, merely various flavours.
Challenges encountered when using this approach:
- Simple fixes are complex,
- Editorial workflow and review processes,
- Image management and preview,
- Category management,
- Search implementation,
- When devs need to write technical docs, things can get hectic.
Is it worth the trouble?
Docs differ significantly from source code. In theory, it might look fascinating to go down the “Docs-as-Code” path.
In practice it can get quite rough, especially when you are a single person creating software documentation across a few GitHub repos, or writing the occasional technical post. If that's your situation, I suggest skipping it, unless you enjoy self-punishment.
Big companies with dedicated teams should look at tools like docToolchain. The philosophy of docToolchain is that software documentation should be treated in the same way as code, combined with the arc42 template for software architecture.
Note: the offshore company I currently work for implemented Docs-as-Code. I'll keep you updated on the progress.
29 Jun 2018 •
12 min. read
These are my personal notes, broadly covering the BASICS necessary for machine learning and artificial intelligence.
Some final caveats:
- This post may not be helpful for your purposes.
- This is still very much a work in progress and it will be changing a lot.
- Some content may be out of order or missing. Don’t get upset.
- The notes are created in (GitHub-flavored) Markdown, so they unfortunately lack snazzy interactivity.
- Part of this material is adapted, sometimes directly copied, from elsewhere. I have tried to give credit where due.
The raw notes are open source. Should you encounter errors, or have a better way of explaining something, don’t hesitate to submit a pull request.
Table of Contents
- 1. What is Machine Learning?
- 1.1 Functions
- 1.2 Algorithms - Grouped by Learning Style
- 1.3 Supervised v. Unsupervised
- 2.1 Regression
- 2.2 Classification
- 9. In Practice
1. What is Machine Learning?
Machine learning provides the foundation for artificial intelligence. We train a software model using data, i.e. the model learns from the training cases, and then we use the trained model to make predictions for new data cases.
Let’s start with a data set that contains historical records, also known as observations. Every record includes numerical features (X) quantifying characteristics of the item we are working with.
There are also the values we try to predict (Y). We will use the training cases to train the machine learning model so that it calculates a value for (Y) from the features in (X). Simply put, we are creating a function that operates on a set of features, (X), to produce predictions, (Y):
f: X → Y.
1.1 Functions
At heart, a function is the mapping from a set in a domain to a set in a codomain. A function can map a set to itself. For example,
f(x) = x², also notated
f: x ↦ x², is the mapping of all real numbers to all real numbers, or
f: ℝ → ℝ.
The range is the subset of the codomain which the function maps to.
Functions don’t necessarily map to every value in the codomain. Where they do, the range equals the codomain.
There are two sorts of functions. Functions which map to ℝ are known as scalar-valued or real-valued. Functions which map to ℝⁿ, with n > 1, are known as vector-valued.
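As a minimal illustration (these particular functions are my own examples, not taken from the reference):

```latex
% Scalar-valued (real-valued): the codomain is the real line
f \colon \mathbb{R} \to \mathbb{R}, \qquad f(x) = x^2
% The codomain is all of R, but the range is only [0, \infty).

% Vector-valued: the codomain is R^n with n > 1
g \colon \mathbb{R} \to \mathbb{R}^2, \qquad g(t) = (\cos t, \; \sin t)
```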
Ref: Web: Mathworld Wolfram - Eric W. Weisstein.
1.2 Algorithms - Grouped by Learning Style
- Supervised learning - the algorithm is given pre-labeled training examples to learn from.
- Unsupervised learning - the algorithm is given unlabeled examples.
- Semi-supervised learning - the algorithm uses a mix of labeled & unlabeled data.
- Active learning - similar to semi-supervised learning, but the algorithm can “ask” for extra labeled data based on what it needs to improve on.
- Reinforcement learning - actions are taken and rewarded or penalized; the goal is to maximize long-term reward (or, equivalently, minimize long-term penalty).
Ref: Book: Neural Computing: Theory and Practice (1989) - Philip D. Wasserman.
Note: Following the course guidelines, we’ll discuss the two most common methods: supervised and unsupervised.
1.3 Supervised v. Unsupervised
In a supervised learning scenario, we start with observations that include known values for the variable we want to predict. We call these labels.
Because we are starting with data that includes the label we are trying to predict, we can train the model using only some of the data and hold the rest back for evaluating the model’s performance.
We’ll then use an algorithm to train a model that fits the features to the known label.
As we started with a known label value, we can validate the model by comparing the value predicted by the function to the actual label value that we knew. Then, when we’re happy that the model works, we can use it with new observations for which the label is unknown, and generate new predicted values.
- m = number of training examples
- x’s = input variables or features
- y’s = output variables or the “target” variable
- (x(i), y(i)) = the ith training example
- h = hypothesis i.e. the function that the algorithm learns, taking x’s as input and outputting y’s
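Using this notation, here is a minimal sketch of a linear hypothesis h in Python (the parameter values and training pairs are made up for illustration):

```python
import numpy as np

# Hypothetical learned parameters: theta[0] = intercept, theta[1] = slope
theta = np.array([1.5, 0.04])

def h(x: float) -> float:
    """Linear hypothesis h(x) = theta_0 + theta_1 * x."""
    return theta[0] + theta[1] * x

# (x(i), y(i)) pairs; m = number of training examples
training_examples = [(100.0, 5.4), (150.0, 7.6), (200.0, 9.5)]
m = len(training_examples)
predictions = [h(x_i) for x_i, _ in training_examples]
```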
In an unsupervised learning scenario, we don’t have any known label values in our training data set.
We’ll train the model by finding similarities between observations. Once we have trained the model, each new observation is added to the cluster of observations with similar characteristics (a cluster is simply a group).
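A minimal clustering sketch with scikit-learn's k-means (the observations are invented; any unsupervised algorithm could stand in here):

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up observations: [heart rate, body temperature] per person
X = np.array([
    [134, 37.0], [140, 37.2], [90, 36.6],
    [95, 36.7], [170, 38.1], [165, 38.0],
])

# No labels are given; the algorithm groups similar observations itself
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# A new observation is assigned to the cluster it most resembles
new_obs = np.array([[138, 37.1]])
print(model.predict(new_obs))
```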
2.1 Regression
When we need to predict a continuous valued output (i.e. a numeric value), we use a supervised learning technique called regression.
Let’s take one male subject. We want to model the calories he burns while exercising.
First we get some preliminary data (age: 34, gender: 1, weight: 60, height: 165), then put him on a fitness monitor and capture additional information. We then model the calories burned using features from his exercise, like his heart rate: 134, temperature: 37, and duration: 25.
In this case we know all the features and have a known label value of 231 calories. So we need our algorithm to learn a function that operates on all the male’s exercise features to give us a net result of 231.
f([34, 1, 60, 165, 134, 37, 25]) = 231
A sample of one person isn’t likely to give a function that generalizes well. So we gather the same data from a large number of participants, and then train the model using the bigger set of data.
f([X1, X2, X3, X4, X5, X6, X7]) = Y
Now, having a function that can be used to calculate the label (Y), we can plot the values of (Y) calculated for specific (X) feature values on a chart, and interpolate any new value of (X) to predict an unknown (Y).
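A minimal sketch of this workflow with scikit-learn's linear regression (the rows of exercise data and the calorie labels are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Invented rows of [age, gender, weight, height, heart rate, temp, duration]
X = np.array([
    [34, 1, 60, 165, 134, 37, 25],
    [41, 0, 72, 170, 128, 37, 30],
    [29, 1, 55, 158, 145, 38, 20],
    [52, 0, 80, 178, 120, 36, 40],
    [38, 1, 66, 162, 150, 38, 35],
    [45, 0, 75, 172, 118, 36, 28],
])
y = np.array([231, 280, 190, 310, 300, 250])  # calories burned (the label)

# Hold some data back to evaluate the model later
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(model.predict(X_test))  # predicted calories for the held-back cases
```

The same pattern scales to any number of participants; more data generally gives a function that generalizes better.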
As we started with data that includes the label we are trying to predict, we can train the model using some of the data and keep the rest for evaluating the model’s performance. We can then use the model to predict f(X) for the evaluation data, and compare the predictions, or scored labels, to the actual labels that we know to be true.
The differences between the predicted and actual labels are called the residuals, and they can tell us something about the level of error in the model.
We can measure the error in the model using root mean square error (RMSE) and mean absolute error (MAE).
Both are absolute measures of error in the model. For example, an RMSE value of 5 means that the standard deviation of the prediction errors on our test data is 5 calories. An error of 5 calories seems to indicate a reasonably good model; but suppose we were predicting how long an exercise session takes. An error of 5 hours would make for a very bad model.
You might want to evaluate the model using relative metrics, to indicate a more general level of error as a relative value between 0 and 1. Relative absolute error (RAE) and relative squared error (RSE) produce metrics where the closer to 0 the error, the better the model.
The coefficient of determination, better known as R squared (R²), is another relative metric, but this time a value closer to 1 indicates a good fit for the model.
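These metrics are straightforward to compute by hand; a small sketch (the true and predicted values are made up):

```python
import numpy as np

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    residuals = y_true - y_pred
    rmse = np.sqrt(np.mean(residuals ** 2))
    mae = np.mean(np.abs(residuals))
    # Relative metrics compare the model's error to a naive
    # "always predict the mean" model
    rae = np.sum(np.abs(residuals)) / np.sum(np.abs(y_true - y_true.mean()))
    rse = np.sum(residuals ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - rse  # coefficient of determination
    return {"RMSE": rmse, "MAE": mae, "RAE": rae, "RSE": rse, "R2": r2}

y_true = np.array([231.0, 280.0, 190.0])
y_pred = np.array([226.0, 287.0, 196.0])
print(regression_metrics(y_true, y_pred))
```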
2.2 Classification
Another kind of supervised learning is called classification.
Classification is the technique we can use to predict which class, or category, something belongs to. The simplest variant is binary classification, where we predict whether an entity belongs to one of two classes (true or false).
For example, we’ll take a number of patients at a health clinic, gather some personal details (e.g. age: 23, pregnancies: 1, glucose: 171, BMI: 43.5), run tests, and identify which patients are diabetic and which are not.
We could learn a function that can be applied to the patient features and give us the result 1 for patients that are diabetic:
f([23, 1, 171, 43.5]) = 1
and 0 for patients that aren’t.
Generally, a binary classifier is a function that can be applied to features (X) to produce a (Y) value of 1 or 0. The function won’t actually calculate an absolute value of 1 or 0. Instead, it will calculate a value between 0 and 1,
y = f(x), and we’ll use a threshold value to decide whether the result should be counted as a 1 or a 0.
When using this model to predict values, the resulting value is classed as 1 or 0, depending on which side of the threshold line it falls.
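A minimal sketch of the idea, using logistic regression as the classifier (the patient rows, labels, and the 0.5 threshold are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented rows of [age, pregnancies, glucose, BMI]; label 1 = diabetic
X = np.array([
    [23, 1, 171, 43.5], [35, 0, 90, 22.1], [51, 3, 160, 35.2],
    [28, 0, 85, 24.0], [44, 2, 155, 31.8], [31, 1, 95, 23.3],
])
y = np.array([1, 0, 1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)

# The model yields a value between 0 and 1; a threshold turns it into a class
probability = clf.predict_proba([[23, 1, 171, 43.5]])[0, 1]
threshold = 0.5
predicted_class = 1 if probability >= threshold else 0
print(probability, predicted_class)
```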
Because classification is a supervised learning technique, we withhold some of the data to validate the model against its known labels.
Cases where the model predicts a 1 for a test observation whose actual label is 1 are considered true positives.
Cases where the model predicts 0, and the actual label is 0, are true negatives.
If the model predicts 1, but the actual label is a 0, that’s a false positive.
If the model predicts 0, but the value is 1, we have a false negative.
The threshold determines how predicted values are classified. In the case of our diabetes model, tolerating more false positives, and thus reducing the number of false negatives, is preferable, as more people at risk of diabetes get identified.
The numbers of positives and negatives generated by a model are crucial in evaluating its effectiveness. For that purpose we use a confusion matrix, i.e. our basis for calculating performance metrics for the classifier.
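A small sketch of the four confusion-matrix counts and two metrics derived from them (the label vectors are made up):

```python
import numpy as np

def confusion_counts(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))  # true positives
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))  # true negatives
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))  # false positives
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))  # false negatives
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn}

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
counts = confusion_counts(y_true, y_pred)
accuracy = (counts["TP"] + counts["TN"]) / len(y_true)
recall = counts["TP"] / (counts["TP"] + counts["FN"])  # positives we caught
print(counts, accuracy, recall)
```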
Ref: Web: MSXDAT262017 - edX.
TO BE CONTINUED
9. In Practice
Before you start building your machine learning system, you should:
- Be explicit about the problem.
- Start with a specific question. What do you want to predict, and what tools do you have to predict it with?
- Brainstorm possible strategies: what features might be useful, and do you need to collect more data?
- Try to find good input data.
- Randomly split the data into training samples, testing samples and validation samples (a minimal split sketch follows this list).
- Use features of the data, or features built from it, that may help with making predictions.
- Start with a simple algorithm which can be implemented quickly.
- Test the simple algorithm on your validation data, and evaluate the results.
- Plot learning curves to decide where things need work. For example, do you need more data, or more features?
- Error analysis: manually examine the examples in the validation set that your algorithm made errors on.
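As referenced above, a minimal sketch of a random three-way split (the 60/20/20 proportions are a common convention, not a rule):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 1000  # made-up dataset size
indices = rng.permutation(n)

# Shuffle once, then slice: 60% training, 20% validation, 20% testing
train_idx = indices[: int(0.6 * n)]
val_idx = indices[int(0.6 * n): int(0.8 * n)]
test_idx = indices[int(0.8 * n):]
```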
To generate a learning curve, you deliberately shrink the size of the training set and observe how the training and validation errors change as you increase it.
With smaller training sets, we expect the training error to be low, because it is easier to fit less data. As the training set size grows, the average training error is expected to grow.
Conversely, we expect the average validation error to decrease as the training set size increases.
If your training and validation error curves flatten out at a high error as set sizes increase, then you have a high bias problem. Adding more training data will not (by itself) help much.
On the other hand, high variance problems are indicated by a large gap between the training and validation error curves as the training set size increases, together with a low training error. In this case the curves are still converging, and adding more training data would help.
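A minimal sketch of generating these curve points by hand (linear regression on synthetic data; in practice you would plot the three columns):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def learning_curve_points(X_train, y_train, X_val, y_val, sizes):
    """Train on growing subsets; record (size, train error, validation error)."""
    points = []
    for m in sizes:
        model = LinearRegression().fit(X_train[:m], y_train[:m])
        train_err = mean_squared_error(y_train[:m], model.predict(X_train[:m]))
        val_err = mean_squared_error(y_val, model.predict(X_val))
        points.append((m, train_err, val_err))
    return points

# Synthetic data, invented purely to make the sketch runnable
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=200)
X_train, y_train, X_val, y_val = X[:150], y[:150], X[150:], y[150:]
print(learning_curve_points(X_train, y_train, X_val, y_val, [10, 50, 100, 150]))
```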
Ref: Web: Intro to Artificial Intelligence - Udacity.