This library contains direct translations of the exchange-correlation functionals in libxc to jax. The core calculations in libxc are implemented in Maple, which gives us the opportunity to translate them directly into Python with the help of Maple's CodeGeneration package.
Usage
Installation
pip install jax-xc
Invoking the Functionals
jax_xc's API is functional: it takes the electron density $\rho$ as a `Callable` and returns the energy density $\varepsilon_{xc}$ as a `Callable`.
$$E_{xc} = \int \rho(r) \varepsilon_{xc}(r) dr$$
LDA and GGA
Unlike libxc, which takes pre-computed densities and their derivatives at given coordinates, jax_xc's API is designed to directly take a density function.
```python
import jax
import jax.numpy as jnp
import jax_xc


def rho(r):
  """Electron number density. We take a Gaussian as an example.

  A function that takes a real coordinate and returns a scalar
  indicating the number density of electrons at coordinate r.

  Args:
    r: a 3D coordinate.
  Returns:
    rho: If it is unpolarized, it is a scalar.
      If it is polarized, it is an array of shape (2,).
  """
  return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))


# create a density functional
gga_xc_pbe = jax_xc.gga_x_pbe(polarized=False)

# a grid point in 3D
r = jnp.array([0.1, 0.2, 0.3])

# pass rho and r to the functional to compute epsilon_xc (energy density) at r,
# corresponding to the 'zk' in libxc
epsilon_xc_r = gga_xc_pbe(rho, r)
print(epsilon_xc_r)
```
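To connect this to the $E_{xc}$ formula above, here is a minimal, hedged sketch of approximating the integral on a uniform Cartesian grid. The grid extent, spacing, and the use of `jax.vmap` over the coordinate argument are illustrative assumptions, not part of jax_xc's documented API.

```python
import jax
import jax.numpy as jnp
import jax_xc


def rho(r):
  return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))


gga_xc_pbe = jax_xc.gga_x_pbe(polarized=False)

# placeholder uniform grid covering [-5, 5]^3; a real calculation would use
# a proper quadrature grid
xs = jnp.linspace(-5.0, 5.0, 32)
grid = jnp.stack(jnp.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
dv = (xs[1] - xs[0]) ** 3  # volume element of the uniform grid

# assumes the functional is traceable under jax.vmap
eps = jax.vmap(lambda r: gga_xc_pbe(rho, r))(grid)
rho_vals = jax.vmap(rho)(grid)

# E_xc is approximately sum_i rho(r_i) * eps_xc(r_i) * dV
E_xc = jnp.sum(rho_vals * eps) * dv
print(E_xc)
```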
mGGA
Unlike LDA and GGA, which depend only on the density function, mGGA functionals also depend on the molecular orbitals.
```python
import jax
import jax.numpy as jnp
import jax_xc


def mo(r):
  """Molecular orbitals. We take Gaussians as an example.

  A function that takes a real coordinate and returns the values of the
  molecular orbitals at this coordinate.

  Args:
    r: a 3D coordinate.
  Returns:
    mo: If it is unpolarized, it is an array of shape (N,).
      If it is polarized, it is an array of shape (N, 2).
  """
  # Assume we have 3 molecular orbitals
  return jnp.array([
    jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1)),
    jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0.5, scale=1)),
    jnp.prod(jax.scipy.stats.norm.pdf(r, loc=-0.5, scale=1)),
  ])


rho = lambda r: jnp.sum(mo(r)**2, axis=0)

mgga_xc_cc06 = jax_xc.mgga_xc_cc06(polarized=False)

# a grid point in 3D
r = jnp.array([0.1, 0.2, 0.3])

# evaluate the exchange-correlation energy per particle at this point,
# corresponding to the 'zk' in libxc
print(mgga_xc_cc06(rho, r, mo))
```
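The docstrings above also mention the spin-polarized case. The following is a hedged sketch of what a polarized call might look like, assuming the call signature is unchanged and the density simply returns an array of shape (2,) with spin-up and spin-down densities; the densities themselves are made-up Gaussians.

```python
import jax
import jax.numpy as jnp
import jax_xc


def rho_polarized(r):
  # hypothetical spin-up / spin-down number densities at r, shape (2,)
  n_up = jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0.0, scale=1.0))
  n_dn = jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0.5, scale=1.0))
  return jnp.array([0.6 * n_up, 0.4 * n_dn])


gga_x_pbe_pol = jax_xc.gga_x_pbe(polarized=True)
r = jnp.array([0.1, 0.2, 0.3])
print(gga_x_pbe_pol(rho_polarized, r))
```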
Hybrid Functionals
Hybrid functionals expose the same API, with extra attributes that let users access parameters needed outside of libxc/jax_xc (e.g. the fraction of exact exchange).
```python
import jax
import jax.numpy as jnp
import jax_xc


def rho(r):
  """Electron number density. We take a Gaussian as an example.

  A function that takes a real coordinate and returns a scalar
  indicating the number density of electrons at coordinate r.

  Args:
    r: a 3D coordinate.
  Returns:
    rho: If it is unpolarized, it is a scalar.
      If it is polarized, it is an array of shape (2,).
  """
  return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))


hyb_gga_xc_pbeb0 = jax_xc.hyb_gga_xc_pbeb0(polarized=False)

# a grid point in 3D
r = jnp.array([0.1, 0.2, 0.3])

# evaluate the exchange-correlation energy per particle at this point,
# corresponding to the 'zk' in libxc
print(hyb_gga_xc_pbeb0(rho, r))

# access extra attributes
cam_alpha = hyb_gga_xc_pbeb0.cam_alpha  # fraction of full Hartree-Fock exchange
```
The complete list of extra attributes is given below; the meaning of each attribute is the same as in libxc:

- cam_alpha: fraction of full Hartree-Fock exchange, used both for usual hybrids as well as range-separated ones
- cam_beta: fraction of short-range only(!) exchange in range-separated hybrids
- cam_omega: range separation constant
- nlc_b: non-local correlation, b parameter
- nlc_C: non-local correlation, C parameter
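As a rough illustration of how the calling code typically consumes these parameters (this follows the libxc convention described above; jax_xc only exposes the numbers, and the exact bookkeeping can differ between functionals), the exact-exchange contribution added on top of the semi-local part is often written as

$$E_{xc} = E_{xc}^{\text{DFA}} + c_\alpha E_x^{\text{HF}} + c_\beta E_x^{\text{SR-HF}}(\omega)$$

where $c_\alpha$ is cam_alpha, $c_\beta$ is cam_beta, and $\omega$ is cam_omega; for an ordinary (non-range-separated) hybrid, $c_\beta = 0$ and the short-range term vanishes.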
Experimental
We support automatic functional derivative!
```python
import jax
import jax.numpy as jnp
import jax_xc
import autofd.operators as o
from autofd import function
from jaxtyping import Array, Float32


@function
def rho(r: Float32[Array, "3"]) -> Float32[Array, ""]:
  """Electron number density. We take a Gaussian as an example.

  A function that takes a real coordinate and returns a scalar
  indicating the number density of electrons at coordinate r.

  Args:
    r: a 3D coordinate.
  Returns:
    rho: If it is unpolarized, it is a scalar.
      If it is polarized, it is an array of shape (2,).
  """
  return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))


# create a density functional
gga_x_pbe = jax_xc.experimental.gga_x_pbe
epsilon_xc = gga_x_pbe(rho)

# a grid point in 3D
r = jnp.array([0.1, 0.2, 0.3])

# pass rho and r to the functional to compute epsilon_xc (energy density) at r,
# corresponding to the 'zk' in libxc
print(f"The function signature of epsilon_xc is {epsilon_xc}")

energy_density = epsilon_xc(r)
print(f"epsilon_xc(r) = {energy_density}")

vxc = jax.grad(lambda rho: o.integrate(rho * gga_x_pbe(rho)))(rho)
print(f"The function signature of vxc is {vxc}")
print(vxc(r))
```
Supported Functionals
Please refer to the functionals section in jax_xc's documentation for the complete list of supported functionals.
Numerical Correctness
We test all the functionals that are auto-generated from Maple files against the reference values from libxc. The test compares the outputs of libxc and jax_xc and checks that they agree within a certain tolerance, namely atol=2e-10 and rtol=2e-10.
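A minimal sketch of what such a comparison could look like is below. The density, sample points, and the eps_libxc stand-in are placeholders; in the actual test suite the reference values come from libxc itself.

```python
import jax
import jax.numpy as jnp
import numpy as np
import jax_xc


def rho(r):
  return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))


gga_xc_pbe = jax_xc.gga_x_pbe(polarized=False)
points = jnp.array([[0.1, 0.2, 0.3], [0.0, 0.5, -0.2]])
eps_jax_xc = np.asarray([gga_xc_pbe(rho, r) for r in points])

# stand-in for the reference values computed by libxc on the same inputs,
# so that this snippet runs on its own
eps_libxc = eps_jax_xc.copy()

np.testing.assert_allclose(eps_jax_xc, eps_libxc, atol=2e-10, rtol=2e-10)
```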
Performance Benchmark
We report the performance benchmark of jax_xc against libxc on a
64-core machine with Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz.
We sample the points at which the functionals are evaluated, varying the number of points from 1 to $10^7$, and measure the runtime of each functional. Note that the runtime of jax_xc is measured excluding the time spent on just-in-time compilation.
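A hedged sketch of how such a measurement could exclude compilation time is shown below. The density, functional, and grid are placeholders, and it is assumed that the functional can be wrapped in jax.jit and jax.vmap; the key points are the warm-up call that triggers compilation and block_until_ready, which is needed for accurate timing under JAX's asynchronous dispatch.

```python
import time

import jax
import jax.numpy as jnp
import jax_xc


def rho(r):
  return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))


gga_xc_pbe = jax_xc.gga_x_pbe(polarized=False)
grid = jax.random.normal(jax.random.PRNGKey(0), (10000, 3))

batched = jax.jit(jax.vmap(lambda r: gga_xc_pbe(rho, r)))

# warm-up call: triggers tracing and compilation, excluded from the timing
batched(grid).block_until_ready()

start = time.perf_counter()
batched(grid).block_until_ready()
print(f"runtime excluding JIT compilation: {time.perf_counter() - start:.6f} s")
```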
We visualize the mean value (averaged for both polarized and unpolarized)
of the runtime of jax_xc and libxc in the following figure. The
y-axis is log-scale.
jax_xc's runtime is consistently below libxc's for all batch sizes. The speedup ranges from 3x to 10x and is more significant for larger batch sizes.
We hypothesize that the reason
for the speed up is that Jax’s JIT compiler is able to optimize the
functionals (e.g. vectorization, parallel execution, instruction fusion,
constant folding for floating points, etc.) better than
libxc.
We visualize the distribution of the runtime ratio of jax_xc to libxc in the following figure. The ratio is closer to 0.1 for large batch sizes (roughly a 10x speedup) and is consistently below 1.0.
Note that we exclude one data point, mgga_x_2d_prhg07, from the runtime ratio visualization because it is an outlier: Jax lacks native support for the lambertw function, so we fall back to tensorflow_probability.substrates.jax.math.lambertw.
Caveats
The following functionals from libxc are not available in jax_xc because some of the required functions are not available in jax:
```python
gga_x_fd_lb94         # Becke-Roussel not having a closed-form expression
gga_x_fd_revlb94      # Becke-Roussel not having a closed-form expression
gga_x_gg99            # Becke-Roussel not having a closed-form expression
gga_x_kgg99           # Becke-Roussel not having a closed-form expression
hyb_gga_xc_case21     # Becke-Roussel not having a closed-form expression
hyb_mgga_xc_b94_hyb   # Becke-Roussel not having a closed-form expression
hyb_mgga_xc_br3p86    # Becke-Roussel not having a closed-form expression
lda_x_1d_exponential  # Requires explicit 1D integration
lda_x_1d_soft         # Requires explicit 1D integration
mgga_c_b94            # Becke-Roussel not having a closed-form expression
mgga_x_b00            # Becke-Roussel not having a closed-form expression
mgga_x_bj06           # Becke-Roussel not having a closed-form expression
mgga_x_br89           # Becke-Roussel not having a closed-form expression
mgga_x_br89_1         # Becke-Roussel not having a closed-form expression
mgga_x_mbr            # Becke-Roussel not having a closed-form expression
mgga_x_mbrxc_bg       # Becke-Roussel not having a closed-form expression
mgga_x_mbrxh_bg       # Becke-Roussel not having a closed-form expression
mgga_x_mggac          # Becke-Roussel not having a closed-form expression
mgga_x_rpp09          # Becke-Roussel not having a closed-form expression
mgga_x_tb09           # Becke-Roussel not having a closed-form expression
gga_x_wpbeh           # jit too long for E1_scaled
gga_c_ft97            # jit too long for E1_scaled
lda_xc_tih            # vxc functional
gga_c_pbe_jrgx        # vxc functional
gga_x_lb              # vxc functional
```
Building from Source Code
Modify .env.example to fill in your environment variables, then rename it to .env. Then run source .env to load them into your shell.
- OUTPUT_USER_ROOT: The path where the bazel cache will be stored. This is useful if you are building on a shared filesystem.
- MAPLE_PATH: The path to the maple binary.
- TMP_INSTALL_PATH: The path to a temporary directory where the wheel will be installed. This is useful if you are building on a shared filesystem.
Make sure you have bazel and maple installed, and that your Python environment has the dependencies in requirements.txt installed.
Once the build finishes, the Python wheel can be found under bazel-bin/external/jax_xc_repo. For example, the wheel name for version 0.0.7 is jax_xc-0.0.7-cp310-cp310-manylinux_2_17_x86_64.whl.
Install the Python wheel; if needed, specify the install path.
The tests can be run without the wheel-building command above, though it might take longer to build all the components needed for the tests. To run all the tests:
The test output can be found in bazel-testlogs/tests/test_impl/test.log for the target tests:test_impl, and similarly for the others. If you prefer output on the command line, add --test_output=all to the above command.
License
Aligned with libxc, jax_xc is licensed under the Mozilla Public License 2.0. See
LICENSE for the full license text.
This project provides an overall exploratory analysis of the 2022 World Cup. Based on historical data, I will try to unfold the following insights:
- How Qatar, as the WC 2022 host country, performed compared to other host countries in the past
- Countries that overachieved and underperformed in WC 2022 based on historical dominance and recent form
- Argentina's path to glory compared to France's in WC 2018
Tools: the whole analysis is conducted in R, particularly the tidyverse packages (forcats, ggplot2, lubridate, etc.) and some other smaller analysis packages. The environment is RStudio.
Datasets:
The following datasets are used:
1. World Cup Events:
world_cups.csv: Information about World Cup events since 1935 and the countries finishing in the top 4, from Maven Analytics
2. World Cup Matches:
2022_world_cup_matches.csv: Information about world cup 2022 matches from Maven Analytics
world_cup_matches.csv: Information and results of all World Cup matches before 2022, from Maven Analytics
Fifa_world_cup_matches.csv: World Cup 2022 match results and stats (note: this data is joined with 2022_world_cup_matches.csv to retrieve all match results for WC 2022)
3. International Matches:
International_Matches.csv: Information about International Matches before WC 2022
4. World Cup Groups:
2022_world_cup_groups.csv: WC 2022 participants along with their group and final standing
Directory
The directory contains the following files and directories:
- README.MD: Overview and summary of the project
- R_Script.R: R code file
- R_Script.md: Rendered R script file, readable on GitHub
- R_Script_files/figure-gfm: graphs and charts of the whole project
targetSuchAsPointerEvent: Object (required) (in most cases, pass the received PointerEvent object)

| Property name | Description |
| --- | --- |
| currentTarget | Target element |
| clientX | Client x-coordinate of center of ripplet |
| clientY | Client y-coordinate of center of ripplet |

options: Object (optional)

| Property name | Default | Description |
| --- | --- | --- |
| className | "" | Class name to be set for the ripplet element (not for this library to use, but for the user to style that element) |
| color | "currentColor" | Ripplet color that can be interpreted by browsers. Specify null if the color or image of the ripple effect is based on the CSS className above. If the special value "currentColor" is specified, the text color of the target element (getComputedStyle(currentTarget).color) is used. |
| centered | | Whether to force the origin to be centered (and ignore clientX and clientY). |
| appendTo | "auto" | "auto" \| "target" \| "parent" \| CSS selector string like "body". Specifies the element to which the ripple effect element will be appended. If "auto" is specified, it will be the target or its closest ancestor that is not an instance of HTMLInputElement, HTMLSelectElement, HTMLTextAreaElement, HTMLImageElement, HTMLHRElement or SVGElement. |
If you don't need detailed control, you can use the declarative edition, which captures pointerdown events.
Load "ripplet-declarative.js" and add the data-ripplet attribute to HTML elements, with or without options.
Dynamically appended elements also get the ripple effect if the data-ripplet attribute is present.
In the declarative edition, the ripple effect remains until the pointerup or pointerleave event occurs.
Customized LinkedIn Profile to JSON Resume Browser Tool
🖼️ This is a slightly tweaked version of the LinkedIn to JSON Resume Chrome Extension. That project is outdated because it isn’t using the latest version of JSON Schema. Furthermore, I have customized that schema myself, so I have to base this Chrome extension off of my own schema.
Build
npm install
Make a code change and then run npm run build-browserext, which will generate files in ./build-browserext.
npm run package-browserext will side-load the build as a ZIP in webstore-zips directory.
If you want to do something else besides side-loading, read the original README.
Usage
For local use:
npm run package-browserext will side-load the build as a ZIP in webstore-zips directory.
In Chrome, go to chrome://extensions then drag-n-drop the ZIP onto the browser. Note that developer mode must be turned on.
Go to your LinkedIn profile, i.e. www.linkedin.com/in/anthonydellavecchia, and click the LinkedIn Profile to JSON button.
After a second or two, JSON will be generated. Copy this, as it is a raw/pre-transformation version.
Note that in the Chrome Extension, you can select either the custom version of the JSON schema that I created, or the last stable build from v0.0.16 (mine is based on v1.0.0).
Design
browser-ext/popup.html holds the HTML for the Chrome Extension.
jsonresume.scheama.latest.ts is the latest schema from JSON Resume Schema (v1.0.0).
jsonresume.scheama.stable.ts is the stable but very outdated schema from JSON Resume Schema (v0.0.16).
src/main.js holds most of the JavaScript to get and transform data from LinkedIn.
src/templates.js holds the templates for the schema.
Click to expand README.md of the source repository!
An extremely easy-to-use browser extension for exporting your full LinkedIn Profile to a JSON Resume file or string.
Usage / Installation Options:
There are (or were) a few different options for how to use this:
Feel free to install, use, and then immediately uninstall if you just need a single export
No data is collected
[Deprecated] (at least for now): Bookmarklet
This was originally how this tool worked, but had to be retired as a valid method when LinkedIn added a stricter CSP that prevented it from working
Code to generate the bookmarklet is still in this repo if LI ever loosens the CSP
Schema Versions
This tool supports multiple versions of the JSON Resume Schema specification for export, which you can easily swap between in the dropdown selector! ✨
“Which schema version should I use?”
If you are unsure, you should probably just stick with “stable”, which is the default. It should have the most widespread support across the largest number of platforms.
Support for Multilingual Profiles
LinkedIn has a unique feature that allows you to create different versions of your profile for different languages, rather than relying on limited translation of certain fields.
For example, if you are bilingual in both English and German, you could create one version of your profile for each language, and then viewers would automatically see the correct one depending on where they live and their language settings.
I’ve implemented support (starting with v1.0.0) for multilingual profile export through a dropdown selector:
The dropdown should automatically get populated with the languages that the profile you are currently viewing supports, in addition to your own preferred viewing language in the #1 spot. You should be able to switch between languages in the dropdown and click the export button to get a JSON Resume export with your selected language.
Note: LinkedIn offers language choices through a Locale string, which is a combination of country (ISO-3166) and language (ISO-639). I do not make decisions as to what languages are supported.
This feature is the part of this extension most likely to break in the future; LI has some serious quirks around multilingual profiles – see my notes for details.
Export Options
There are several main buttons in the browser extension, with different effects. You can hover over each button to see the alt text describing what they do, or read below:
LinkedIn Profile to JSON: Converts the profile to the JSON Resume format, and then displays it in a popup modal for easy copying and pasting
Download JSON Resume Export: Same as above, but prompts you to download the result as an actual .json file.
Download vCard File: Export and download the profile as a Virtual Contact File (.vcf) (aka vCard)
There are some caveats with this format; see below
vCard Limitations and Caveats
Partial birthdate (aka BDAY) values (e.g. where the profile has a month and day, but has not opted to share their birth year), are only supported in v4 (RFC-6350) and above. This extension currently only supports v3, so in these situations the tool will simply omit the BDAY field from the export
The LinkedIn display photo (included in vCard) served by LI is a temporary URL, with a fixed expiration date set by LinkedIn. From observations, this is often set months into the future, but could still be problematic for address book clients that don’t cache images. To work around this, I’m converting it to a base64 string; this should work with most vCard clients, but also increases the vCard file size considerably.
Chrome Side-loading Instructions
Instead of installing from the Chrome Webstore, you might want to "side-load" a ZIP build for either local development or to try out a new release that has not yet made it through the Chrome review process. Here are the instructions for doing so:
Find the ZIP you want to load
If you want to side-load the latest version, you can download a ZIP from the releases tab
If you want to side-load a local build, use npm run package-browserext to create a ZIP
Go to Chrome’s extension setting page (chrome://extensions)
Turn on developer mode (upper right toggle switch)
Drag the downloaded zip to the browser to let it install
Test it out, then uninstall
You can also unpack the ZIP and load it as “unpacked”.
Troubleshooting
When in doubt, refresh the profile page before using this tool.
Troubleshooting – Debug Log
If I’m trying to assist you in solving an issue with this tool, I might have you share some debug info. Currently, the easiest way to do this is to use the Chrome developer’s console:
Append ?li2jr_debug=true to the end of the URL of the profile you are on
Open Chrome dev tools, and specifically, the console (instructions)
Run the extension (try to export the profile), and then look for red messages that show up in the console (these are errors, as opposed to warnings or info logs).
You can filter to just error messages, in the filter dropdown above the console.
Updates:
Update History (Click to Show / Hide)
| Date | Release | Notes |
| --- | --- | --- |
| 2/27/2021 | 2.1.2 | Fix: Multiple issues around work history / experience; missing titles, ordering, etc. Overhauled approach to extracting work entries. |
| 12/19/2020 | 2.1.1 | Fix: Ordering of work history with new API endpoint (#38) |
| 12/7/2020 | 2.1.0 | Fix: Issue with multilingual profile, when exporting your own profile with a different locale than your profile’s default. (#37) Fix: Incorrect birthday month in exported vCards (off by one). Fix: Better pattern for extracting profile ID from URL; fixes extracting from virtual sub-pages of profile (e.g. /detail/contact-info), or with query or hash strings at the end. |
| 7/7/2020 | 1.4.2 | Fix: For work positions, if fetched via profilePositionGroups, LI ordering (the way it looks on your profile) was not being preserved. |
| 7/31/2020 | 1.4.1 | Fix: In some cases, the wrong profileUrnId was extracted from the current profile, which led to the work history API call being run against a different profile (e.g. from the "recommended section", or something like that). |
| 7/21/2020 | 1.4.0 | Fix: For vCard exports, the previous profile was getting grabbed after SPA navigation between profiles. |
| 7/6/2020 | 1.3.0 | Fix: Incomplete work position entries for some users; LI was limiting the amount of pre-fetched data. Had to implement request paging to fix. Also refactored a lot of code, improved result caching, and other tweaks. |
| 6/18/2020 | 1.2.0 | Fix / improve vCard export feature. |
| 6/5/2020 | 1.1.0 | New feature: vCard export, which you can import into Outlook / Google Contacts / etc. |
| 5/31/2020 | 1.0.0 | Brought output up to par with "spec", integrated schemas as TS, added support for multilingual profiles, overhauled JSDoc types. Definitely a breaking change, since the output has changed to mirror the schema more closely (biggest change is that website in several spots has become url). |
| 5/9/2020 | 0.0.9 | Fixed "references", added certificates (behind setting), and formatting tweaks. |
| 4/4/2020 | 0.0.8 | Added version string display to popup. |
| 4/4/2020 | 0.0.7 | Fixed and improved contact info collection (phone, Twitter, and email). Miscellaneous other tweaks. |
| 10/22/2019 | 0.0.6 | Updated recommendation querySelector after LI changed DOM. Thanks again, @ lucbpz. |
| 10/19/2019 | 0.0.5 | Updated LI date parser to produce date strings compliant with the JSON Resume schema (padded). Thanks @ lucbpz. |
| 9/12/2019 | 0.0.4 | Updated Chrome webstore stuff to avoid LI IP usage (Google took down the extension page due to a complaint). Updated actual scraper code to grab the full list of skills vs. just highlighted. |
| 8/3/2019 | NA | Rewrote this tool as a browser extension instead of a bookmarklet to get around the CSP issue. Seems to work great! |
| 7/22/2019 | NA | ALERT: This bookmarklet is currently broken, thanks to LinkedIn adding a new restrictive CSP (Content Security Policy) header to the site. I’ve opened an issue to discuss this, and both short-term (requires using the console) and long-term (browser extension) solutions. |
| 6/21/2019 | 0.0.3 | I saw the bookmarklet was broken depending on how you came to the profile page, so I refactored a bunch of code and found a much better way to pull the data. Should be much more reliable! |
What is JSON Resume?
“JSON Resume” is an open-source standard / schema, currently gaining in adoption, that standardizes the content of a resume into a shared underlying structure that others can use in automated resume formatters, parsers, etc. Read more about it here, or on GitHub.
What is this tool?
I made this because I wanted a way to quickly generate a JSON Resume export from my LinkedIn profile, and got frustrated with how locked down the LinkedIn APIs are and how slow it is to request your data export (up to 72 hours). “Install” the tool to your browser, then click to run it while looking at a LinkedIn profile (preferably your own), and my code will grab the various pieces of information off the page and then show a popup with the full JSON resume export that you can copy and paste to wherever you would like.
Development
With the rewrite to a browser extension, I actually configured the build scripts to be able to still create a bookmarklet from the same codebase, in case the bookmarklet ever becomes a viable option again.
Building the browser extension
npm run build-browserext will transpile and copy all the right files to ./build-browserext, which you can then side-load into your browser. If you want to produce a single ZIP archive for the extension, npm run package-browserext will do that.
Use build-browserext-debug for a source-map debug version. To get more console output, append li2jr_debug=true to the query string of the LI profile you are using the tool with.
The bookmarklet can then be dragged to your bookmarks from the final build/install-page.html.
All of the above should happen automatically when you do npm run build-bookmarklet.
If this ever garners enough interest and needs to be updated, I will probably want to re-write it with TypeScript to make it more maintainable.
LinkedIn Documentation
For understanding some peculiarities of the LI API, see LinkedIn-Notes.md.
Debugging
Debugging the extension is a little cumbersome, because of the way Chrome sandboxes extension scripts and how code has to be injected. An alternative to setting breakpoints in the extension code itself, is to copy the output of /build/main.js and run it via the console.
Even if you have the repo inside of a local static server, you can’t inject it via a script tag or fetch & eval, due to LI’s restrictive CSP.
If you do want to find the actual injected code of the extension in Chrome dev tools, you should be able to find it under Sources -> Content Scripts -> top -> JSON Resume Exporter -> {main.js}
Debugging Snippets
Helpful snippets (subject to change; these rely heavily on internals):
```js
// Get main profileDB (after running extension)
var profileRes = await liToJrInstance.getParsedProfile(true);
var profileDb = await liToJrInstance.internals.buildDbFromLiSchema(profileRes.liResponse);
```
DISCLAIMER:
This tool is not affiliated with LinkedIn in any manner. Intended use is to export your own profile data, and you, as the user, are responsible for using it within the terms and services set out by LinkedIn. I am not responsible for any misuse, or repercussions of said misuse.
The scripts in this project are designed to ease the creation of LVM-enabled Enterprise Linux AMIs for use in AWS environments. They have been successfully tested with CentOS 7.x, Scientific Linux 7.x and Red Hat Enterprise Linux 7.x. They should work with other EL7-derived operating systems.
Note: The scripts can also be used to generate bootstrap and/or recovery AMIs: non-LVMed AMIs intended to help generate the LVM-enabled AMIs or recover LVM-enabled instances. However, this functionality is only lightly tested. It is known to produce CentOS 7.x AMIs suitable for bootstrapping. It is also known to not produce RHEL 7.x AMIs suitable for bootstrapping. As this is not the scripts’ primary use-case, documentation for such is not included (though it should be easy enough for an experienced EL7 administrator to figure out from reading the scripts’ contents).