Commit 93db7c02 authored by Michael Rudolf

ML-workshop preparation

 - Removed outdated Notebooks
 - Added a short documentation to illustrate data import/export
 - Rewrote the feature generation
 - Projects now fully work
 - A few other updates and format changes...
parent c43289ac
# Stick Slip Learning
Suite of scripts to analyze annular shear experiments with a machine learning
approach. From a series of experiments run under different conditions, specific
segments are extracted, features are generated, and these are then used as
input for a machine learning algorithm. For the terms used and a short
explanation of what to expect from the data see
[Terminology](https://gitext.gfz-potsdam.de/analab-code/shear-madness/blob/master/Terminology.md).
## Quick Guide for ML-Workshop participants
The majority of scripts in this repository are concerned with the data
preparation _before_ the actual machine learning part. If you want to run your
own feature generation pipeline, please ask @mrudolf directly to provide you
with the raw or pre-processed data files. These are also available in other
formats if needed, as long as a Python module exists for conversion.
To see a sample implementation of `feature_generation.py` have a look at
`src/set-shear-madness.py`.
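As a minimal sketch, running the feature generation on a folder of
pre-processed subset files looks roughly like this (the call mirrors the
example notebooks; the data path is a placeholder you have to adapt):

```python
# Minimal sketch, not the full set-shear-madness.py pipeline.
import feature_generation as ftg

subset_dir = '/path/to/b_5kPa_371-01-27-GB300_subsets/'  # placeholder: folder with *.h5 subsets
ftg.run(subset_dir)  # in the example notebooks this call writes a *_features.h5 file
```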
## Requirements
The scripts require [Python 3 (external link!)](https://python.org) to run. The
required external libraries can be found in
[requirements.txt](https://gitext.gfz-potsdam.de/analab-code/shear-madness/blob/master/requirements.txt).
Unless noted otherwise, the most recent version of each module (and of Python)
at the time of the commit is used. Older versions might work but remain untested.
## Overview
All relevant scripts are located in the
[src](https://gitext.gfz-potsdam.de/analab-code/shear-madness/blob/master/src)
directory. It contains two 'master' scripts that show a full processing
pipeline, either for a single set or for multiple sets within a single folder.
The scripts use several
[modules](https://gitext.gfz-potsdam.de/analab-code/shear-madness/blob/master/src/modules)
representing different stages of processing:
1. Data Preparation > `preparation.py`
_Splits the raw data into smaller sets of equal loading rate. Afterwards,
even smaller subsets are generated, and subsets that do not meet certain
criteria are omitted. The resulting subsets each contain an equal number of
samples and a certain number of events._
2. Feature Generation > `feature_functions.py` and `feature_generation.py`
_Generates features with the functions implemented in
`feature_functions.py`. The output is a 2D feature array `X` with a column
for each feature and a row for each sample, plus a one-dimensional array
containing the data labels. You can add new feature functions to
`feature_functions.py` following the instructions therein (a sketch of such
a function follows this list)._
3. Learning > `learning.py`
___To be implemented!__ Uses the chosen machine learning model (from
scikit-learn) to fit the labeled data. A rough sketch of this step is given
at the end of this section._
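As referenced in step 2, here is a minimal sketch of a feature function. The
`do_` prefix and the signature (one window of samples in, one scalar out) are
inferred from the example notebooks, which collect all module-level functions
of `feature_functions.py` via `inspect` and call each one once per window:

```python
import numpy as np

def do_peak_to_peak(window):
    """Hypothetical feature: peak-to-peak range of the signal within one window."""
    window = np.asarray(window)
    return np.max(window) - np.min(window)
```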
The other modules contain helper functions for easy file handling, for saving
the current stage of processing, and for similar tasks.
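For the learning step (step 3 above, still to be implemented), a rough sketch
of what it could look like, following the workshop notebooks (scaling the
feature matrix, splitting it, and fitting a scikit-learn classifier);
`fit_labeled_data` is a hypothetical helper name, not part of the repository:

```python
# Sketch only: X is the 2D feature array, Y the label array from feature generation.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def fit_labeled_data(X, Y):
    X_scaled = StandardScaler().fit_transform(X)
    X_train, X_test, y_train, y_test = train_test_split(
        X_scaled, Y, test_size=0.5, shuffle=True)
    mdl = RandomForestClassifier()
    mdl.fit(X_train, y_train)
    return mdl, mdl.score(X_test, y_test)  # fitted model and test accuracy
```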
## Documentation
Because most of the work is outsourced into modules, the two 'master' scripts
and the comments inside them should provide enough documentation to understand
the project pipeline. A more in-depth documentation of what the functions do,
including more comments on the source code, is given in the form of Jupyter
notebooks in
[Notebooks](https://gitext.gfz-potsdam.de/analab-code/shear-madness/tree/master/notebooks).
Because GitLab renders Jupyter notebooks well, they can be viewed online like
regular documentation. If you want to follow the individual steps and actually
run the notebooks, make sure to install
[Jupyter (external link!)](https://jupyter.org) on your machine and run a
Jupyter notebook server. In some cases you need to place the notebooks into the
`src` folder so that they properly pick up the modules. They are not required
to run the main scripts.
## Acknowledgements
The software in this repository has benefited from contributions by:
- J. Bedford (@jbed)
This research has been partially funded by the Deutsche Forschungsgemeinschaft
(DFG) through grant [CRC 1114 "Scaling Cascades in Complex Systems", Project B01
"Fault networks and scaling properties of deformation accumulation" (external
link!)](https://www.sfb1114.de).
- Add some kind of config which defines the window sizes etc...
Terms used to describe the dataset itself.
|Term |Explanation
|--- |---
|Experiment |A file containing the raw measurement data including some intermediate processing results. The usual format is HDF5 (*e.g. b\_5kPa\_371-01-27-GB300.h5*) and its contents can be visualized by using `utils.show_h5_contents()`.
|Set |Extracted data from an `Experiment` which has been taken at constant normal load and loading velocity. A set contains complete time series data from two `Channels` (`friction` and `lid displacement`), the respective average `normal load`, and `loading velocity`. Because the duration of each velocity step is different but load point displacement is constant, the length of a set is different depending on the current velocity. During data preparation several of these sets are created and saved in HDF5 format.
|Subset |A sliced version of a `Set` with the same properties but a smaller number of samples. During data preparation a subset is generated from each set according to several prerequisites and saved in HDF5 format.
|Channel |A channel is a time series of measured data points, usually in the form of a one-dimensional array. The channels that are recorded by the testing machine are: `loading velocity`, `shear stress`, `normal stress`, and `lid displacement`. The usual acquisition frequency is 625 Hz. For advanced processing the channel data is sometimes converted into a different representation, e.g. the `shear stress` is converted to non-dimensional `friction`.
|Sample |A single measurement point in a `Channel`.
|Window |A small slice of data extracted from a `Channel` over a small number of samples. The window size may vary depending on the task and available memory.
|Step(size) |Distance in samples between individual `Windows`. When `step size == window size` then the windows follow each other without overlap. If `step size < window size` then there is a certain overlap between the individual windows.
|Step fraction |Ratio of `Step(size)` to `Window` size, i.e. `step size = window size × step fraction`.
|Feature |A specific variable calculated in each window which is used as an input for the machine learning algorithm.
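A small numeric illustration of how `Window`, `Step(size)`, and `Step fraction`
relate (toy values only; the actual windowing is done by the preparation and
feature generation code):

```python
import numpy as np

channel = np.arange(20)           # toy Channel with 20 Samples
window = 8                        # Window size in samples
step_frac = 0.5                   # Step fraction
step = round(window * step_frac)  # Step size = 4, i.e. 50 % overlap

windows = [channel[i:i + window]
           for i in range(0, len(channel) - window + 1, step)]
print(len(windows))               # 4 overlapping windows of 8 samples each
```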
## Setup
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Workflow\n",
"1. Take window\n",
"2. Detrend\n",
"3. Filter\n",
"4. Feature"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the necessary modules\n",
"import importlib\n",
"import h5py\n",
"import numpy as np\n",
"import os\n",
"import shutil\n",
"import matplotlib.pyplot as plt\n",
"import inspect\n",
"import multiprocessing as mp\n",
"from multiprocessing import Pool\n",
"\n",
"import filters\n",
"import feature_functions as ffc\n",
"\n",
"importlib.reload(ffc)\n",
"importlib.reload(filters)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functions to extract windows and create features"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def extract_windows(data, window, step_frac, eqs):\n",
" '''\n",
" Extracts windows of data and gives it back as a list of arrays. Also\n",
" generates a numpy array with labels that have been found using peak\n",
" detection.\n",
" '''\n",
"\n",
" # We do a list comprehension to create small windows of data\n",
" step = round(window*step_frac)\n",
" num_its = int(np.floor(len(data)/(window*step_frac)))\n",
" win_data = [data[i:i+step] for i in range(0, num_its*window, step)]\n",
"\n",
" # The labels are generated from an array of the same length as the data.\n",
" labels = np.zeros_like(data)\n",
" # Using binary format makes it easy to add up\n",
" # multiple labels just in case there are multiple in one window\n",
" leg_eqs = {'eqi': int('00001', 2),\n",
" 'eqd': int('00010', 2),\n",
" 'eqm': int('00100', 2),\n",
" 'eqf': int('01000', 2),\n",
" 'eqe': int('10000', 2)}\n",
" # In the array we replace the zeros with the binary label of the event\n",
" for eq in eqs:\n",
" for i in eqs[eq]:\n",
" if not np.isnan(i):\n",
" try:\n",
" labels[int(i)] = leg_eqs[eq]\n",
" except IndexError:\n",
" pass\n",
" # Then we add up all labels in a window, to be able to combine events\n",
" label_data = [np.sum(labels[i:i+step]) for i in range(0,\n",
" num_its*window,\n",
" step)]\n",
"\n",
" return (win_data, label_data)\n",
"\n",
"\n",
"def create_features(window, f_list):\n",
" ''' Uses the functions given in f_list to calculate features '''\n",
" features = np.zeros(len(f_list))\n",
" for (i, fnc) in enumerate(f_list):\n",
" features[i] = fnc[1](window)\n",
" return features\n",
"\n",
" \n",
"def _hdf5group_to_dict(h5group):\n",
" '''\n",
" Returns a dictionary with each element in the h5group.\n",
" '''\n",
" out_dict = dict()\n",
" for dset in h5group.keys():\n",
" out_dict[dset] = h5group[dset][()]\n",
" return out_dict\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Initiate location and extract windows"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# List of file paths, depending on where I run the script\n",
"file_paths = {\n",
" 'home_office': 'C:/Users/Michael/ownCloud/DocStelle/GitRepos/shear-madness/1-feature-generation/ExampleData/b_5kPa_371-01-27-GB300_subsets/',\n",
" 'lab': 'C:/Users/M.Rudolf/ownCloud/DocStelle/GitRepos/shear-madness/1-feature-generation/ExampleData/b_5kPa_371-01-27-GB300_subsets/',\n",
" 'office': '/home/mrudolf/ownCloud/DocStelle/GitRepos/shear-madness/1-feature-generation/ExampleData/b_5kPa_371-01-27-GB300_subsets/',\n",
" 'office2': '/home/mrudolf/Documents/py_allSets_ML/'\n",
"}\n",
"# Location where the files are\n",
"file_path = file_paths['office2']\n",
"\n",
"# Parameters\n",
"window = 30\n",
"step_frac = 1\n",
"min_cycles = 5\n",
"min_win = 10\n",
"\n",
"win_data = []\n",
"label_data = []\n",
"# Iterate over files and create windowed data\n",
"file_list = [f for f in os.listdir(file_path) if f.endswith('.h5')]\n",
"for (i, file) in enumerate(file_list):\n",
" with h5py.File(file_path+file) as hf:\n",
" shear = hf['shear']\n",
" eqs = _hdf5group_to_dict(hf['eqs'])\n",
" (windows, labels) = extract_windows(shear, window, step_frac, eqs)\n",
" win_data.extend(windows)\n",
" label_data.extend(labels)\n",
" print('%6i windows after %2i files.' % (len(win_data),i+1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. Filter the data using 'filters'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"win_filt = [filters.filter_data(window, 60, 625) for window in win_data]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 3. Create Features"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%%time\n",
"importlib.reload(ffc)\n",
"# Look into feature_functions to get a list of functions to process with\n",
"f_list = inspect.getmembers(ffc, inspect.isfunction)\n",
"fnames = [entry[0].replace('do_','') for entry in f_list]\n",
"features = [create_features(window, f_list) for window in win_data]\n",
"X = np.array(features)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X.shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X = np.array(features)\n",
"Y = np.array(label_data)\n",
"X.shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"folder = os.path.split(file_path)[0]\n",
"(out_folder, out_file) = os.path.split(folder)\n",
"out_file = out_file.replace('_subsets', '_features.h5')\n",
"asciiList = [n.encode(\"ascii\", \"ignore\") for n in fnames]\n",
"\n",
"with h5py.File(out_folder+'/'+out_file, 'w') as out_hf:\n",
" out_hf.create_dataset('X', data=X, compression='gzip')\n",
" out_hf.create_dataset('Y', data=Y, compression='gzip')\n",
" out_hf.create_dataset('feature_names', data=asciiList)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"out_folder+'/'+out_file\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fnames"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import h5py\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"from sklearn.preprocessing import StandardScaler"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"file = '/home/mrudolf/ownCloud/DocStelle/GitRepos/shear-madness/2-learning/ExampleData/b_5kPa_371-01-27-GB300_features.h5'\n",
"with h5py.File(file,'r') as hf:\n",
" X = hf['X'][()]\n",
" Y = hf['Y'][()]\n",
" feature_names = hf['feature_names'][()]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib notebook\n",
"plt.imshow(X, aspect=0.0001)\n",
"plt.colorbar()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"scaler = StandardScaler()\n",
"X_scaled = scaler.fit_transform(X)\n",
"%matplotlib notebook\n",
"plt.imshow(X_scaled, aspect=0.0001)\n",
"plt.colorbar()\n",
"plt.clim([-1, 1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plt.subplots()\n",
"i = 1\n",
"ax.plot(X_scaled[:,i])\n",
"ax.set_title(feature_names[i])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"C = np.corrcoef(X_scaled.T)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.imshow(C)\n",
"plt.colorbar()\n",
"plt.clim([-1,1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"feature_names"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<module 'feature_generation' from '/home/mrudolf/ownCloud/DocStelle/GitRepos/shear-madness/1-feature-generation/feature_generation.py'>"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import feature_generation as ftg\n",
"import importlib\n",
"importlib.reload(ftg)\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 40.9 s, sys: 113 ms, total: 41 s\n",
"Wall time: 40.9 s\n"
]
}
],
"source": [
"%%time\n",
"# List of file paths, depending on where I run the script\n",
"file_paths = {\n",
" 'home_office': 'C:/Users/Michael/ownCloud/DocStelle/GitRepos/shear-madness/1-feature-generation/ExampleData/b_5kPa_371-01-27-GB300_subsets/',\n",
" 'lab': 'C:/Users/M.Rudolf/ownCloud/DocStelle/GitRepos/shear-madness/1-feature-generation/ExampleData/b_5kPa_371-01-27-GB300_subsets/',\n",
" 'office': '/home/mrudolf/ownCloud/DocStelle/GitRepos/shear-madness/1-feature-generation/ExampleData/b_5kPa_371-01-27-GB300_subsets/'\n",
"}\n",
"# Location where the files are\n",
"file_path = file_paths['office']\n",
"\n",
"ftg.run(file_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import h5py\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"from sklearn.preprocessing import StandardScaler\n",
"\n",
"file = '/home/mrudolf/ownCloud/DocStelle/GitRepos/shear-madness/2-learning/ExampleData/b_5kPa_371-01-27-GB300_features.h5'\n",
"with h5py.File(file,'r') as hf:\n",
" X = hf['X'][()]\n",
" Y = hf['Y'][()]\n",
" feature_names = hf['feature_names'][()]\n",
"\n",
"scaler = StandardScaler()\n",
"X_scaled = scaler.fit_transform(X[:,0:3])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Unsupervised Learning\n",
"## Kmeans"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.cluster import KMeans, DBSCAN\n",
"\n",
"mdl = KMeans(\n",
" n_clusters=5\n",
")\n",
"\n",
"preds = mdl.fit_predict(X_scaled)\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib notebook\n",
"plt.figure()\n",
"uni_preds = np.unique(preds)\n",
"for uni in uni_preds:\n",
" plt.plot(X_scaled[preds==uni,0], X_scaled[preds==uni,2],'.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Supervised Learning"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"[np.mean(Y==i) for i in np.unique(Y)]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y = np.zeros_like(Y)\n",
"y[Y==1] = 1\n",
"[np.mean(y==i) for i in np.unique(y)]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import train_test_split\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X_scaled, y,\n",
" test_size=0.5,\n",
" shuffle=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Random Forest Classifier"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.ensemble import RandomForestClassifier\n",
"from sklearn.neural_network import MLPClassifier\n",
"\n",
"mdl = RandomForestClassifier()\n",
"\n",
"mdl.fit(X_train, y_train)\n",
"y_test_preds = mdl.predict(X_test)\n",
"y_train_preds = mdl.predict(X_train)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig, (ax1,ax2) = plt.subplots(ncols=2)\n",
"ax1.plot(X_train[y_train==0,0], X_train[y_train==0,1], 'x')\n",
"ax1.plot(X_train[y_train==1,0], X_train[y_train==1,1], 'x')\n",
"\n",
"ax2.plot(X_train[y_train_preds==0,0], X_train[y_train_preds==0,1], 'o')\n",
"ax2.plot(X_train[y_train_preds==1,0], X_train[y_train_preds==1,1], 'o')\n",
"\n",
"accur = np.mean(y_train==y_train_preds)\n",
"random = 1-np.mean(y_train)\n",
"print(accur-random)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig, (ax1,ax2) = plt.subplots(ncols=2)\n",
"ax1.plot(X_test[y_test==0,0], X_test[y_test==0,1], 'x')\n",
"ax1.plot(X_test[y_test==1,0], X_test[y_test==1,1], 'x')\n",
"\n",
"ax2.plot(X_test[y_test_preds==0,0], X_test[y_test_preds==0,1], 'o')\n",
"ax2.plot(X_test[y_test_preds==1,0], X_test[y_test_preds==1,1], 'o')\n",
"\n",
"accur = np.mean(y_test==y_test_preds)\n",
"random = 1-np.mean(y_test)\n",
"print(accur-random)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"