Category: Blog

  • microcks-testcontainers-go-demo

    Microcks Testcontainers Go Demo

    This application demonstrates how to integrate Microcks via Testcontainers within your development inner loop.

    You will work with a Go application and explore how to:

    • Use Microcks for provisioning third-party API mocks,
    • Use Microcks for simulating external Kafka event publishers,
    • Write tests using Microcks contract-testing features for both REST/OpenAPI-based APIs and Events/AsyncAPI-based messaging.

    License Summary

    The code in this repository is made available under the MIT license. See the LICENSE file for details.

    Running tests

    $ go test -timeout 30s -run "^TestListPastries$" ./internal/client -v
    
    === RUN   TestListPastries
    2024/09/25 22:05:00 github.com/testcontainers/testcontainers-go - Connected to docker: 
      Server Version: 24.0.2
      API Version: 1.43
      Operating System: Docker Desktop
      Total Memory: 11962 MB
      Testcontainers for Go Version: v0.34.0
      Resolved Docker Host: unix:///var/run/docker.sock
      Resolved Docker Socket Path: /var/run/docker.sock
      Test SessionID: 96332d7af971d08d478592eca13f0c15f30f89ee17251b870f732595e9f5f341
      Test ProcessID: bfbf6820-0924-4e6a-b6f9-9eeb5f677ac5
    2024/09/25 22:05:00 🐳 Creating container for image testcontainers/ryuk:0.9.0
    2024/09/25 22:05:00 ✅ Container created: eb0f86cd914d
    2024/09/25 22:05:00 🐳 Starting container: eb0f86cd914d
    2024/09/25 22:05:00 ✅ Container started: eb0f86cd914d
    2024/09/25 22:05:00 ⏳ Waiting for container id eb0f86cd914d image: testcontainers/ryuk:0.9.0. Waiting for: &{Port:8080/tcp timeout:<nil> PollInterval:100ms skipInternalCheck:false}
    2024/09/25 22:05:00 🔔 Container is ready: eb0f86cd914d
    2024/09/25 22:05:00 🐳 Creating container for image quay.io/microcks/microcks-uber:1.9.0-native
    2024/09/25 22:05:00 ✅ Container created: a7f410f91fb2
    2024/09/25 22:05:00 🐳 Starting container: a7f410f91fb2
    2024/09/25 22:05:01 ✅ Container started: a7f410f91fb2
    2024/09/25 22:05:01 ⏳ Waiting for container id a7f410f91fb2 image: quay.io/microcks/microcks-uber:1.9.0-native. Waiting for: &{timeout:<nil> Log:Started MicrocksApplication IsRegexp:false Occurrence:1 PollInterval:100ms}
    2024/09/25 22:05:01 🔔 Container is ready: a7f410f91fb2
    2024/09/25 22:05:01 🐳 Terminating container: a7f410f91fb2
    2024/09/25 22:05:01 🚫 Container terminated: a7f410f91fb2
    --- PASS: TestListPastries (1.27s)
    PASS
    ok      github.com/microcks/microcks-testcontainers-go-demo/internal/client     1.586s
    $ go test ./internal/test -test.timeout=20m -failfast -v -test.run TestBaseSuite -testify.m ^TestOpenAPIContractAdvanced
    
    $ go test ./internal/test -test.timeout=20m -failfast -v -test.run TestBaseSuite -testify.m ^TestPostmanCollectionContract
    
    $ go test ./internal/test -test.timeout=20m -failfast -v -test.run TestBaseSuite -testify.m ^TestOrderEventIsPublishedWhenOrderIsCreated
    
    $ go test ./internal/test -test.timeout=20m -failfast -v -test.run TestBaseSuite -testify.m ^TestEventIsConsumedAndProcessedByService
    Visit original content creator repository https://github.com/microcks/microcks-testcontainers-go-demo
  • jax_xc

    JAX Exchange Correlation Library

    This library contains direct translations of the exchange-correlation functionals in libxc to JAX. The core calculations in libxc are implemented in Maple, which gives us the opportunity to translate them directly into Python with the help of CodeGeneration.

    Usage

    Installation

    pip install jax-xc

    Invoking the Functionals

    jax_xc’s API is functional: it receives the density $\rho$ as a Callable and returns $\varepsilon_{xc}$ as a Callable.

    $$E_{xc} = \int \rho(r) \varepsilon_{xc}(r) dr$$

    LDA and GGA

    Unlike libxc, which takes pre-computed densities and their derivatives at given coordinates, the jax_xc API is designed to directly take a density function.

    import jax
    import jax.numpy as jnp
    import jax_xc
    
    
    def rho(r):
      """Electron number density. We take gaussian as an example.
    
      A function that takes a real coordinate, and returns a scalar
      indicating the number density of electron at coordinate r.
    
      Args:
        r: a 3D coordinate.
      Returns:
        rho: If it is unpolarized, it is a scalar.
            If it is polarized, it is an array of shape (2,).
      """
      return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))
    
    # create a density functional
    gga_xc_pbe = jax_xc.gga_x_pbe(polarized=False)
    
    # a grid point in 3D
    r = jnp.array([0.1, 0.2, 0.3])
    
    # pass rho and r to the functional to compute epsilon_xc (energy density) at r.
    # corresponding to the 'zk' in libxc
    epsilon_xc_r = gga_xc_pbe(rho, r)
    print(epsilon_xc_r)
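
    As a rough illustration of the energy formula above, the energy density returned by the functional can be integrated against the density over a grid. The following is only a minimal sketch, not part of jax_xc: it assumes the returned functional is JAX-traceable and can be vmapped over grid points, and the box size and grid resolution are arbitrary choices.

    import jax
    import jax.numpy as jnp
    import jax_xc

    def rho(r):
      # Same Gaussian density as in the example above.
      return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))

    gga_xc_pbe = jax_xc.gga_x_pbe(polarized=False)

    # Uniform grid over the box [-5, 5]^3; resolution kept small for brevity.
    n = 20
    xs = jnp.linspace(-5.0, 5.0, n)
    grid = jnp.stack(jnp.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
    dv = (xs[1] - xs[0]) ** 3  # volume element of one grid cell

    # Evaluate epsilon_xc and rho at every grid point (assumes the functional can be vmapped).
    eps = jax.vmap(lambda point: gga_xc_pbe(rho, point))(grid)
    rho_vals = jax.vmap(rho)(grid)

    # Riemann-sum approximation of E_xc = integral of rho(r) * epsilon_xc(r) dr.
    E_xc = jnp.sum(rho_vals * eps) * dv
    print(E_xc)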

    mGGA

    Unlike LDA and GGA that only depends on the density function, mGGA functionals also depend on the molecular orbitals.

    import jax
    import jax.numpy as jnp
    import jax_xc
    
    
    def mo(r):
      """Molecular orbital. We take gaussian as an example.
    
      A function that takes a real coordinate, and returns the value of
      molecular orbital at this coordinate.
    
      Args:
        r: a 3D coordinate.
      Returns:
        mo: If it is unpolarized, it is an array of shape (N,).
            If it is polarized, it is an array of shape (N, 2).
      """
      # Assume we have 3 molecular orbitals
      return jnp.array([
          jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1)),
          jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0.5, scale=1)),
          jnp.prod(jax.scipy.stats.norm.pdf(r, loc=-0.5, scale=1))
      ])
    
    
    rho = lambda r: jnp.sum(mo(r)**2, axis=0)
    mgga_xc_cc06 = jax_xc.mgga_xc_cc06(polarized=False)
    
    # a grid point in 3D
    r = jnp.array([0.1, 0.2, 0.3])
    
    # evaluate the exchange correlation energy per particle at this point
    # corresponding to the 'zk' in libxc
    print(mgga_xc_cc06(rho, r, mo))

    Hybrid Functionals

    Hybrid functionals expose the same API, with extra attributes for the users to access parameters needed outside of libxc/jax_xc (e.g. the fraction of exact exchange).

    import jax
    import jax.numpy as jnp
    import jax_xc
    
    
    def rho(r):
      """Electron number density. We take gaussian as an example.
    
      A function that takes a real coordinate, and returns a scalar
      indicating the number density of electron at coordinate r.
    
      Args:
        r: a 3D coordinate.
      Returns:
        rho: If it is unpolarized, it is a scalar.
            If it is polarized, it is an array of shape (2,).
      """
      return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))
    
    
    hyb_gga_xc_pbeb0 = jax_xc.hyb_gga_xc_pbeb0(polarized=False)
    
    # a grid point in 3D
    r = jnp.array([0.1, 0.2, 0.3])
    
    # evaluate the exchange correlation energy per particle at this point
    # corresponding to the 'zk' in libxc
    print(hyb_gga_xc_pbeb0(rho, r))
    
    # access to extra attributes
    cam_alpha = hyb_gga_xc_pbeb0.cam_alpha  # fraction of full Hartree-Fock exchange

    The complete list of extra attributes can be found below:

    cam_alpha: float
    cam_beta: float
    cam_omega: float
    nlc_b: float
    nlc_C: float

    The meaning for each attribute is the same as libxc:

    • cam_alpha: fraction of full Hartree-Fock exchange, used both for usual hybrids as well as range-separated ones
    • cam_beta: fraction of short-range only(!) exchange in range-separated hybrids
    • cam_omega: range separation constant
    • nlc_b: non-local correlation, b parameter
    • nlc_C: non-local correlation, C parameter
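
    As a quick, hedged illustration of how cam_alpha is typically used outside of jax_xc: the fraction of exact exchange is mixed into the total exchange-correlation energy. The energies below are placeholders, not values produced by the library.

    import jax_xc

    hyb_gga_xc_pbeb0 = jax_xc.hyb_gga_xc_pbeb0(polarized=False)

    # Hypothetical energies computed by your own quadrature / SCF code (placeholders):
    e_xc_dfa = -0.123    # integral of rho(r) * epsilon_xc(r) dr from the functional above
    e_x_exact = -0.456   # exact (Hartree-Fock) exchange energy, computed elsewhere

    # cam_alpha gives the fraction of exact exchange to mix in.
    e_xc_total = e_xc_dfa + hyb_gga_xc_pbeb0.cam_alpha * e_x_exact
    print(e_xc_total)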

    Experimental

    We support automatic functional derivative!

    import jax
    import jax_xc
    import autofd.operators as o
    from autofd import function
    import jax.numpy as jnp
    from jaxtyping import Array, Float32
    
    @function
    def rho(r: Float32[Array, "3"]) -> Float32[Array, ""]:
      """Electron number density. We take gaussian as an example.
    
      A function that takes a real coordinate, and returns a scalar
      indicating the number density of electron at coordinate r.
    
      Args:
        r: a 3D coordinate.
      Returns:
        rho: If it is unpolarized, it is a scalar.
            If it is polarized, it is an array of shape (2,).
      """
      return jnp.prod(jax.scipy.stats.norm.pdf(r, loc=0, scale=1))
    
    # create a density functional
    gga_x_pbe = jax_xc.experimental.gga_x_pbe
    epsilon_xc = gga_x_pbe(rho)
    
    # a grid point in 3D
    r = jnp.array([0.1, 0.2, 0.3])
    
    # pass rho and r to the functional to compute epsilon_xc (energy density) at r.
    # corresponding to the 'zk' in libxc
    print(f"The function signature of epsilon_xc is {epsilon_xc}")
    
    energy_density = epsilon_xc(r)
    print(f"epsilon_xc(r) = {energy_density}")
    
    vxc = jax.grad(lambda rho: o.integrate(rho * gga_x_pbe(rho)))(rho)
    print(f"The function signature of vxc is {vxc}")
    print(vxc(r))

    Supported Functionals

    Please refer to the functionals section in jax_xc’s documentation for the complete list of supported functionals.

    Numerical Correctness

    We test all the functionals that are auto-generated from Maple files against the reference values in libxc. The test is performed by comparing the outputs of libxc and jax_xc and making sure they agree within a certain tolerance, namely atol=2e-10 and rtol=2e-10.
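
    A hedged sketch of what such a tolerance check might look like; the reference value here is a placeholder rather than actual libxc test data, and the test density is arbitrary.

    import numpy as np
    import jax.numpy as jnp
    import jax_xc

    # Functional under test and an arbitrary smooth test density.
    gga_x_pbe = jax_xc.gga_x_pbe(polarized=False)
    rho = lambda r: jnp.prod(jnp.exp(-(r ** 2)))

    r = jnp.array([0.1, 0.2, 0.3])
    value = gga_x_pbe(rho, r)

    # Placeholder: in the real tests this number comes from libxc.
    reference = float(value)
    print(np.allclose(value, reference, atol=2e-10, rtol=2e-10))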

    Performance Benchmark

    We report the performance benchmark of jax_xc against libxc on a 64-core machine with Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz.

    We sample the points to evaluate the functionals by varying the number of points from 1 to $10^7$. The benchmark is performed by evaluating the runtime of the functional. Note that the runtime of jax_xc is measured by excluding the time of just-in-time compilation.

    We visualize the mean value (averaged for both polarized and unpolarized) of the runtime of jax_xc and libxc in the following figure. The y-axis is log-scale.

    jax_xc’s runtime is consistently below libxc’s for all batch sizes. The speedup ranges from 3x to 10x and is more significant for larger batch sizes.

    We hypothesize that the reason for the speedup is that JAX’s JIT compiler is able to optimize the functionals (e.g. vectorization, parallel execution, instruction fusion, constant folding of floating-point values, etc.) better than libxc.

    We visualize the distribution of the runtime ratio of jax_xc to libxc in the following figure. The ratio is closer to 0.1 for large batch sizes (~10x speedup) and is consistently below 1.0.

    Note that we exclude one data point, mgga_x_2d_prhg07, from the runtime ratio visualization because it is an outlier: JAX lacks support for the lambertw function, so we use tensorflow_probability.substrates.jax.math.lambertw instead.

    Caveats

    The following functionals from libxc are not available in jax_xc because some functions are not available in jax.

    gga_x_fd_lb94          # Becke-Roussel not having a closed-form expression
    gga_x_fd_revlb94       # Becke-Roussel not having a closed-form expression
    gga_x_gg99             # Becke-Roussel not having a closed-form expression
    gga_x_kgg99            # Becke-Roussel not having a closed-form expression
    hyb_gga_xc_case21      # Becke-Roussel not having a closed-form expression
    hyb_mgga_xc_b94_hyb    # Becke-Roussel not having a closed-form expression
    hyb_mgga_xc_br3p86     # Becke-Roussel not having a closed-form expression
    lda_x_1d_exponential   # Requires explicit 1D integration
    lda_x_1d_soft          # Requires explicit 1D integration
    mgga_c_b94             # Becke-Roussel not having a closed-form expression
    mgga_x_b00             # Becke-Roussel not having a closed-form expression
    mgga_x_bj06            # Becke-Roussel not having a closed-form expression
    mgga_x_br89            # Becke-Roussel not having a closed-form expression
    mgga_x_br89_1          # Becke-Roussel not having a closed-form expression
    mgga_x_mbr             # Becke-Roussel not having a closed-form expression
    mgga_x_mbrxc_bg        # Becke-Roussel not having a closed-form expression
    mgga_x_mbrxh_bg        # Becke-Roussel not having a closed-form expression
    mgga_x_mggac           # Becke-Roussel not having a closed-form expression
    mgga_x_rpp09           # Becke-Roussel not having a closed-form expression
    mgga_x_tb09            # Becke-Roussel not having a closed-form expression
    gga_x_wpbeh            # jit too long for E1_scaled
    gga_c_ft97             # jit too long for E1_scaled
    lda_xc_tih             # vxc functional
    gga_c_pbe_jrgx         # vxc functional
    gga_x_lb               # vxc functional

    Building from Source Code

    Modify .env.example to fill in your environment variables, then rename it to .env. Then run source .env to load them into your shell.

    • OUTPUT_USER_ROOT: The path to the bazel cache. This is where the bazel cache will be stored. This is useful if you are building on a shared filesystem.
    • MAPLE_PATH: The path to the maple binary.
    • TMP_INSTALL_PATH: The path to a temporary directory where the wheel will be installed. This is useful if you are building on a shared filesystem.

    Make sure you have bazel and maple installed, and that your Python environment has the dependencies in requirements.txt installed.

    To build the Python wheel:

    bazel --output_user_root=$OUTPUT_USER_ROOT build --action_env=PATH=$PATH:$MAPLE_PATH @jax_xc_repo//:jax_xc_wheel

    Once the build finishes, the Python wheel can be found under bazel-bin/external/jax_xc_repo. For example, the name for version 0.0.7 is jax_xc-0.0.7-cp310-cp310-manylinux_2_17_x86_64.whl.

    Install the Python wheel. If needed, specify the install path:

    pip install {{wheel_name}} --target $TMP_INSTALL_PATH

    Running Test

    The tests can be run without first building the wheel from source as above, though it may take longer to build all the components needed for the tests. To run all the tests:

    bazel --output_user_root=$OUTPUT_USER_ROOT test --action_env=PATH=$PATH:$MAPLE_PATH //tests/...

    To run a specific test, for example test_impl:

    bazel --output_user_root=$OUTPUT_USER_ROOT test --action_env=PATH=$PATH:$MAPLE_PATH //tests:test_impl

    The output for tests:test_impl can be found in bazel-testlogs/tests/test_impl/test.log, and similarly for the other tests. If you prefer output on the command line, add --test_output=all to the command above.

    License

    Aligned with libxc, jax_xc is licensed under the Mozilla Public License 2.0. See LICENSE for the full license text.

    Visit original content creator repository https://github.com/sail-sg/jax_xc
  • wutsi-blog-web

    WebApp for the wutsi blog platform

    Getting Started

    Pre-requisites

    • JDK 1.8
    • MySQL 5.6+
    • Maven 3.6+
    • Google Chrome
    • Setup Maven
      • Setup a token to get access to Github packages
      • Register the repositories in ~/.m2/settings.xml
    <settings>
      ...
      <servers>
        ...
        <!-- Configure connectivity to the Github repositories -->
        <server>
          <id>github-wutsi-blog-client</id>
          <username>YOUR_GITHUB_USERNAME</username>
          <password>YOUR_GITHUB_TOKEN</password>
        </server>
        <server>
          <id>github-wutsi-core</id>
          <username>YOUR_GITHUB_USERNAME</username>
          <password>YOUR_GITHUB_TOKEN</password>
        </server>
        <server>
          <id>github-wutsi-core-aws</id>
          <username>YOUR_GITHUB_USERNAME</username>
          <password>YOUR_GITHUB_TOKEN</password>
        </server>
      </servers>
    </settings>

    Installation

    • Download the code and build
    $ git clone git@github.com:wutsi/wutsi-blog-web.git
    $ cd wutsi-blog-web
    $ mvn clean install
    

    Run test

    • Download the chromedriver version matching your Chrome installation
    • Move the chromedriver binary to the application root
    • Run your tests
    mvn clean install -Dheadless=true
    

    Launch the application

    $ java -jar target/wutsi-blog-web.jar
    
    • Navigate to http://localhost:8081

    How to

    How to format code

    mvn antrun:run@ktlint
    

    How to check code formatting error

    mvn antrun:run@ktlint-format
    

    How to generate coverage report

    mvn jacoco:report
    
    Visit original content creator repository https://github.com/WutsiTeam/wutsi-blog-web
  • Analysis-of-WorldCup-2022

    An-Analysis-of-World-Cup-2022-using-R

    Project Overview

    This project provides an overall exploratory analysis of the World Cup 2022. Based on historical data, I will try to uncover the following insights:

    1. How Qatar, as the WC 2022 host country, performed compared to other host countries in the past
    2. Countries that overachieved and underperformed in WC2022 based on historical dominance and recent form
    3. Argentina’s path to glory compared to France’s in WC 2018

    Tools implemented: the whole analysis is conducted in R, particularly the tidyverse packages (forcats, ggplot2, lubridate, etc.) along with some smaller analysis packages. The environment is RStudio.

    Datasets:

    The following datasets are used:

    1. World Cup Events:

    • world_cups.csv: Information about World Cup events since 1935 and the countries in the top 4, from Maven Analytics

    2. World Cup Matches:

    • 2022_world_cup_matches.csv: Information about world cup 2022 matches from Maven Analytics
    • world_cup_matches.csv: Information and results of all World Cup matches before 2022, from Maven Analytics
    • Fifa_world_cup_matches.csv: World Cup 2022 match results and stats (note: this data will be joined with 2022_world_cup_matches.csv to retrieve all the match results for WC 2022)
    3. International Matches
    • International_Matches.csv: Information about international matches before WC 2022
    4. World Cup Groups
    • 2022_world_cup_groups.csv: WC 2022 participants along with their group and final standing

    Directory

    The directory contains the following files and directories:

    • README.MD: Overview and Summary of the project
    • R_Script.R: R Code File
    • R_Script.md: Rendered R Script File to read in Github
    • R_Script_files
      • figure-gfm: contains the graphs and charts of the whole project
    • data:
      • 2022_world_cup_groups.csv
      • 2022_world_cup_matches.csv
      • B42022_world_cup_matches.csv
      • Fifa_world_cup_matches.csv
      • international_matches.csv
      • world_cups.csv

    Visit original content creator repository https://github.com/hoangp27/Analysis-of-WorldCup-2022

  • ripplet.js

    ripplet.js

    BundlePhobia Types: included License: WTFPL

    Fully controllable vanilla-js material design ripple effect generator.
    This can be used with any JavaScript framework and/or any CSS framework.

    Demo

    Installation

    $ npm i ripplet.js
    import ripplet from 'ripplet.js';
    
    element.addEventListener('pointerdown', ripplet);

    CDN (jsDelivr)

    <script src="https://cdn.jsdelivr.net/npm/ripplet.js@1.1.0"></script>
    <button onpointerdown="ripplet(arguments[0])">Click me!</button>

    Download directly

    Download ripplet.min.js

    API

    ripplet(targetSuchAsPointerEvent, options?) => HTMLElement

    Generate a ripple effect.

    Parameters

    • targetSuchAsPointerEvent: Object (required) (in most cases, pass the received PointerEvent object)
    Property name Description
    currentTarget Target element
    clientX Client x-coordinate of center of ripplet
    clientY Client y-coordinate of center of ripplet
    • options: Object (optional)
    Property name Default Description
    className “” Class name to be set for the ripplet element (not for this library to use, but for user to style that element)
    color “currentColor” Ripplet color that can be interpreted by browsers. Specify null if the color or image of the ripple effect is based on the CSS className above.
    If the special value "currentColor" is specified, the text color of the target element (getComputedStyle(currentTarget).color) is used.
    opacity 0.1 Ripplet opacity between 0 and 1.
    spreadingDuration “.4s” As its name suggests.
    spreadingDelay “0s” As its name suggests.
    spreadingTimingFunction “linear” As its name suggests. See https://developer.mozilla.org/docs/Web/CSS/transition-timing-function
    clearing true Whether or not to clear automatically. If false is specified, the ripple effect should be cleared using ripplet.clear(currentTarget)
    clearingDuration “1s” As its name suggests.
    clearingDelay “0s” As its name suggests.
    clearingTimingFunction “ease-in-out” As its name suggests. See https://developer.mozilla.org/docs/Web/CSS/transition-timing-function
    centered false Whether to force the origin centered (and ignore clientX and clientY).
    appendTo “auto” "auto" | "target" | "parent" | CSS selector string like "body". Specify the element to which the ripple effect element will be appended. If "auto" is specified, it will be the target or its closest ancestor that is not an instance of HTMLInputElement, HTMLSelectElement, HTMLTextAreaElement, HTMLImageElement, HTMLHRElement or SVGElement.

    Return value

    Generated element.

    ripplet.clear(currentTarget?, generatedElement?) => void

    Fade out and remove the ripplet. Use only when the option clearing is false.

    Parameters

    • currentTarget: Element (optional)

    The target element that was passed to ripplet(). If this parameter is not passed, all the ripplets will be cleared.

    • generatedElement: Element (optional)

    The generated element that was returned by ripplet(). If this parameter is not passed, all the ripplets (of the currentTarget above) will be cleared.

    Example

    <button
      onpointerdown="ripplet(arguments[0], { clearing: false })"
      onpointerup="ripplet.clear(this)"
      onpointerleave="ripplet.clear(this)"
    >Keep pressing!</button>

    ripplet.defaultOptions

    You can change the default ripplet options for your app.
    For example:

    import ripplet from 'ripplet';
    
    ripplet.defaultOptions.color = 'rgb(64, 128, 255)';

    Declarative Edition

    If you don’t need detailed control, you can use the declarative edition, which captures pointerdown events.
    Load "ripplet-declarative.js" and add the data-ripplet attribute to HTML elements, with or without options.
    Dynamically appended elements also get the ripple effect if the data-ripplet attribute is present.

    In declarative edition, the ripple effect remains until the pointerup or pointerleave event occurs.

    Example Usage

    <script src="https://cdn.jsdelivr.net/npm/ripplet.js@1.1.0/umd/ripplet-declarative.min.js"></script> <!-- <script>ripplet.defaultOptions.color = 'rgb(0, 255, 0)';</script> --> <button data-ripplet>Default</button> <button data-ripplet="color: rgb(64, 192, 255); spreading-duration: 2s; clearing-delay: 1.8s;">Sky Blue Slow</button>

    or

    import 'ripplet.js/es/ripplet-declarative';
    // require('ripplet.js/umd/ripplet-declarative.min');
    
    // import { defaultOptions } from 'ripplet.js/es/ripplet-declarative';
    // defaultOptions.color = 'rgb(255, 128, 0)';

    or

    Download ripplet-declarative.min.js

    Tips

    I recommend applying the following styles to the ripple target elements:

    1. Erase tap highlight effect for mobile devices
    2. Disable tap-to-hover behavior and double-tap-to-zoom behavior for mobile devices
    /* Example for the declarative edition */
    [data-ripplet] {
      -webkit-tap-highlight-color: transparent; /* 1 */
      touch-action: manipulation; /* 2 */
    }

    License

    WTFPL

    Visit original content creator repository https://github.com/luncheon/ripplet.js
  • weibo-analysis-and-visualization


    Weibo Text Analysis and Visualization

    0. Data Sources and Structure

    Sina Weibo; crawler link:

    https://github.com/HUANGZHIHAO1994/weibospider-keyword

    Weibo post content data structure (JSON document exported from a MongoDB database)

    content_example:
    [
    {'_id': '1177737142_H4PSVeZWD', 'keyword': 'A股', 'crawl_time': '2019-06-01 20:31:13', 'weibo_url': 'https://weibo.com/1177737142/H4PSVeZWD', 'user_id': '1177737142', 'created_at': '2018-11-29 03:02:30', 'tool': 'Android', 'like_num': {'$numberInt': '0'}, 'repost_num': {'$numberInt': '0'}, 'comment_num': {'$numberInt': '0'}, 'image_url': 'http://wx4.sinaimg.cn/wap180/4632d7b6ly1fxod61wktyj20u00m8ahf.jpg', 'content': '#a股观点# 鲍威尔主席或是因为被特朗普总统点名批评后萌生悔改之意,今晚一番讲话被市场解读为美联储或暂停加息步伐。美元指数应声下挫,美股及金属贵金属价格大幅上扬,A50表现也并不逊色太多。对明天A股或有积极影响,反弹或能得以延续。 [组图共2张]'},...
    ]
    

    Weibo comment data structure (JSON document exported from a MongoDB database)

    comment_example:
    [
    {'_id': 'C_4322161898716112', 'crawl_time': '2019-06-01 20:35:36', 'weibo_url': 'https://weibo.com/1896820725/H9inNf22b', 'comment_user_id': '6044625121', 'content': '没问题,', 'like_num': {'$numberInt': '0'}, 'created_at': '2018-12-28 11:19:21'},...
    ]
    
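    For orientation, here is a minimal sketch (not part of the repository) of loading such an export in Python and flattening the MongoDB extended-JSON fields such as {'$numberInt': '0'} into plain integers; the file name is an assumption.

    import json

    def to_int(value):
      # MongoDB exports wrap integers as {'$numberInt': '0'}; unwrap them.
      if isinstance(value, dict) and '$numberInt' in value:
        return int(value['$numberInt'])
      return value

    with open('content.json', encoding='utf-8') as f:  # file name is an assumption
      posts = json.load(f)

    for post in posts:
      post['like_num'] = to_int(post.get('like_num'))
      post['repost_num'] = to_int(post.get('repost_num'))
      post['comment_num'] = to_int(post.get('comment_num'))

    print(posts[0]['like_num'])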

    1. Data Preprocessing

    1. prepro.py, pre_graph.py, senti_pre.py

      Data preprocessing is required to support the various analyses; see these three .py files for the required input file types and the structure of the output data.

      PS:

      When running prepro.py, adjust lines 123, 143, and 166 as needed.

      When running pre_graph.py, adjust lines 127 and 140 as needed.

      When running senti_pre.py, adjust line 119 as needed.

    2. zh_wiki.py, langconv.py

      These two .py files convert Traditional Chinese to Simplified Chinese; no modification is needed.

    2. Data Analysis and Visualization

    1. Word cloud: wc.py (requires prepro.py to have been run; see the sketch after this list)

      Adjust lines 3, 19, and 26 as needed.

    2. Heat map: map.py (requires prepro.py)

      Adjust line 8 as needed.

    3. Repost, comment, and like time series: line.py (requires senti_pre.py and senti_analy.py)

    4. Weibo comment relationship graph: graph.py (requires pre_graph.py)

      Reference

    5. Text clustering: cluster_tfidf.py and cluster_w2v.py (require prepro.py)

    6. LDA topic model analysis: LDA.py (requires senti_pre.py) and tree.py (requires senti_analy.py)

    7. Sentiment analysis (lexicon-based): senti_analy.py (requires senti_pre.py), 3Dbar.py (requires senti_analy.py), pie.py (requires senti_analy.py)

    8. Sentiment analysis (W2V + LSTM): senti_lstm.py in the Sentiment-Analysis-master directory (requires senti_pre.py)

      Adjust line 250 as needed.

      Some files are too large and are hosted on a Baidu Netdisk link:

      Link: https://pan.baidu.com/s/1l447d3d6OSd_yAlsF7b_mA  Access code: og9t

    9. Text similarity analysis: similar.py (for reference only)

    10. Other files for reference: senti_analy_refer.py, Sentiment_lstm.py

    11. About Senti_Keyword_total_id.csv:

      Just download Senti_Keyword_total_id.csv from the Baidu Netdisk link in item 8. Explanation: this file is almost identical to Senti_Keyword_total.csv, except for an extra weibo_id column (the code that generates Senti_Keyword_total_id.csv is not provided here, only the generated file; to generate it yourself, modify senti_analy.py to add a weibo_id column). The Baidu Netdisk link in item 8 contains Senti_Keyword_total_id.csv and Senti_Keyword_total.csv, as well as all the comment and content data. Because lines.py and the other scripts need all the keywords, you have to run senti_analy.py over the full comment.json and content.json to generate Senti_Keyword_total.csv (or simply download Senti_Keyword_total_id.csv from the netdisk and then run lines.py, 3Dbar.py, and pie.py).
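
    As a rough illustration of the word-cloud step (item 1 above), here is a hedged sketch using jieba for segmentation and the wordcloud package; it is not the repository's actual wc.py, and the input file and font path are assumptions.

    import jieba
    from wordcloud import WordCloud

    # Load preprocessed Weibo text (file name is an assumption).
    with open('weibo_texts.txt', encoding='utf-8') as f:
      text = f.read()

    # Segment the Chinese text into words and join with spaces for WordCloud.
    words = ' '.join(jieba.cut(text))

    # A CJK-capable font is required to render Chinese characters (path is an assumption).
    wc = WordCloud(font_path='simhei.ttf', width=800, height=600, background_color='white')
    wc.generate(words)
    wc.to_file('wordcloud.png')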

    Visit original content creator repository https://github.com/HUANGZHIHAO1994/weibo-analysis-and-visualization
  • customized-linkedin-to-jsonresume

    Customized LinkedIn Profile to JSON Resume Browser Tool

    🖼️ This is a slightly tweaked version of the LinkedIn to JSON Resume Chrome Extension. That project is outdated because it isn’t using the latest version of JSON Schema. Furthermore, I have customized that schema myself, so I have to base this Chrome extension off of my own schema.

    Build

    1. npm install
    2. Make a code change and then run npm run build-browserext, which will generate files in ./build-browserext.
    3. npm run package-browserext will package the build as a ZIP in the webstore-zips directory.
    4. If you want to do something else besides side-loading, read the original README.

    Usage

    For local use:

    1. npm run package-browserext will package the build as a ZIP in the webstore-zips directory.
    2. In Chrome, go to chrome://extensions then drag-n-drop the ZIP onto the browser. Note that developer mode must be turned on.
    3. Go to your LinkedIn profile, e.g. www.linkedin.com/in/anthonydellavecchia, and click the LinkedIn Profile to JSON button.
    4. After a second or two, JSON will be generated. Copy this, as it is a raw/pre-transformation version.
    5. Note that in the Chrome Extension, you can select either the custom version of the JSON schema that I created, or the last stable build from v0.0.16 (mine is based on v1.0.0).

    Design

    • browser-ext/popup.html holds the HTML for the Chrome Extension.
    • jsonresume.scheama.latest.ts is the latest schema from JSON Resume Schema (v1.0.0).
    • jsonresume.scheama.stable.ts is the stable but very outdated schema from JSON Resume Schema (v0.0.16).
    • src/main.js holds most of the JavaScript to get and transform data from LinkedIn.
    • src/templates.js holds the templates for the schema.

    Click to expand README.md of the source repository!

    An extremely easy-to-use browser extension for exporting your full LinkedIn Profile to a JSON Resume file or string.

    Demo GIF

    Usage / Installation Options:

    There are (or were) a few different options for how to use this:

    • Fast and simple: Chrome Extension – Get it here
      • Feel free to install, use, and then immediately uninstall if you just need a single export
      • No data is collected
    • [Deprecated] (at least for now): Bookmarklet
      • This was originally how this tool worked, but had to be retired as a valid method when LinkedIn added a stricter CSP that prevented it from working
      • Code to generate the bookmarklet is still in this repo if LI ever loosens the CSP

    Schema Versions

    This tool supports multiple version of the JSON Resume Schema specification for export, which you can easily swap between in the dropdown selector! ✨

    “Which schema version should I use?”

    If you are unsure, you should probably just stick with “stable”, which is the default. It should have the most widespread support across the largest number of platforms.

    Support for Multilingual Profiles

    LinkedIn has a unique feature that allows you to create different versions of your profile for different languages, rather than relying on limited translation of certain fields.

    For example, if you are bilingual in both English and German, you could create one version of your profile for each language, and then viewers would automatically see the correct one depending on where they live and their language settings.

    I’ve implemented support (starting with v1.0.0) for multilingual profile export through a dropdown selector:

    Export Language Selector

    The dropdown should automatically get populated with the languages that the profile you are currently viewing supports, in addition to your own preferred viewing language in the #1 spot. You should be able to switch between languages in the dropdown and click the export button to get a JSON Resume export with your selected language.

    Note: LinkedIn offers language choices through a Locale string, which is a combination of country (ISO-3166) and language (ISO-639). I do not make decisions as to what languages are supported.

    This feature is the part of this extension most likely to break in the future; LI has some serious quirks around multilingual profiles – see my notes for details.

    Export Options

    There are several main buttons in the browser extension, with different effects. You can hover over each button to see the alt text describing what they do, or read below:

    • LinkedIn Profile to JSON: Converts the profile to the JSON Resume format, and then displays it in a popup modal for easy copying and pasting
    • Download JSON Resume Export: Same as above, but prompts you to download the result as an actual .json file.
    • Download vCard File: Export and download the profile as a Virtual Contact File (.vcf) (aka vCard)
      • There are some caveats with this format; see below

    vCard Limitations and Caveats

    • Partial birthdate (aka BDAY) values (e.g. where the profile has a month and day, but has not opted to share their birth year), are only supported in v4 (RFC-6350) and above. This extension currently only supports v3, so in these situations the tool will simply omit the BDAY field from the export
      • See #32 for details
    • The LinkedIn display photo (included in vCard) served by LI is a temporary URL, with a fixed expiration date set by LinkedIn. From observations, this is often set months into the future, but could still be problematic for address book clients that don’t cache images. To work around this, I’m converting it to a base64 string; this should work with most vCard clients, but also increases the vCard file size considerably.

    Chrome Side-loading Instructions

    Instead of installing from the Chrome Webstore, you might might want to “side-load” a ZIP build for either local development, or to try out a new release that has not yet made it through the Chrome review process. Here are the instructions for doing so:

    1. Find the ZIP you want to load
      • If you want to side-load the latest version, you can download a ZIP from the releases tab
      • If you want to side-load a local build, use npm run package-browserext to create a ZIP
    2. Go to Chrome’s extension setting page (chrome://extensions)
    3. Turn on developer mode (upper right toggle switch)
    4. Drag the downloaded zip to the browser to let it install
    5. Test it out, then uninstall

    You can also unpack the ZIP and load it as “unpacked”.

    Troubleshooting

    When in doubt, refresh the profile page before using this tool.

    Troubleshooting – Debug Log

    If I’m trying to assist you in solving an issue with this tool, I might have you share some debug info. Currently, the easiest way to do this is to use the Chrome developer’s console:

    1. Append ?li2jr_debug=true to the end of the URL of the profile you are on
    2. Open Chrome dev tools, and specifically, the console (instructions)
    3. Run the extension (try to export the profile), and then look for red messages that show up in the console (these are errors, as opposed to warnings or info logs).
      • You can filter to just error messages, in the filter dropdown above the console.

    Updates:

    Update History (Click to Show / Hide)
    Date Release Notes
    2/27/2021 2.1.2 Fix: Multiple issues around work history / experience; missing titles, ordering, etc. Overhauled approach to extracting work entries.
    12/19/2020 2.1.1 Fix: Ordering of work history with new API endpoint (#38)
    12/7/2020 2.1.0 Fix: Issue with multilingual profile, when exporting your own profile with a different locale than your profile’s default. (#37)
    11/12/2020 2.0.0 Support for multiple schema versions ✨ (#34)
    11/8/2020 1.5.1 Fix: Omit partial BDAY export in vCard (#32)
    10/22/2020 1.5.0 Fix: Incorrect birthday month in exported vCards (off by one)
    Fix: Better pattern for extracting profile ID from URL, fixes extracting from virtual sub-pages of profile (e.g. /detail/contact-info), or with query or hash strings at the end.
    7/7/2020 1.4.2 Fix: For work positions, if fetched via profilePositionGroups, LI ordering (the way it looks on your profile) was not being preserved.
    7/31/2020 1.4.1 Fix: In some cases, wrong profileUrnId was extracted from current profile, which led to work history API call being ran against a different profile (e.g. from “recommended section”, or something like that).
    7/21/2020 1.4.0 Fix: For vCard exports, Previous profile was getting grabbed after SPA navigation between profiles.
    7/6/2020 1.3.0 Fix: Incomplete work position entries for some users; LI was limiting the amount of pre-fetched data. Had to implement request paging to fix.
    Also refactored a lot of code, improved result caching, and other tweaks.
    6/18/2020 1.2.0 Fix / Improve VCard export feature.
    6/5/2020 1.1.0 New feature: vCard export, which you can import into Outlook / Google Contacts / etc.
    5/31/2020 1.0.0 Brought output up to par with “spec”, integrated schemas as TS, added support for multilingual profiles, overhauled JSDoc types.
    Definitely a breaking change, since the output has changed to mirror schema more closely (biggest change is website in several spots has become url)
    5/9/2020 0.0.9 Fixed “references”, added certificates (behind setting), and formatting tweaks
    4/4/2020 0.0.8 Added version string display to popup
    4/4/2020 0.0.7 Fixed and improved contact info collection (phone, Twitter, and email). Miscellaneous other tweaks.
    10/22/2019 0.0.6 Updated recommendation querySelector after LI changed DOM. Thanks again, @ lucbpz.
    10/19/2019 0.0.5 Updated LI date parser to produce date string compliant with JSONResume Schema (padded). Thanks @ lucbpz.
    9/12/2019 0.0.4 Updated Chrome webstore stuff to avoid LI IP usage (Google took down extension page due to complaint). Updated actual scraper code to grab full list of skills vs just highlighted.
    8/3/2019 NA Rewrote this tool as a browser extension instead of a bookmarklet to get around the CSP issue. Seems to work great!
    7/22/2019 NA ALERT: This bookmarklet is currently broken, thanks to LinkedIn adding a new restrictive CSP (Content Security Policy) header to the site. I’ve opened an issue to discuss this, and both short-term (requires using the console) and long-term (browser extension) solutions.
    6/21/2019 0.0.3 I saw the bookmarklet was broken depending on how you came to the profile page, so I refactored a bunch of code and found a much better way to pull the data. Should be much more reliable!

    What is JSON Resume?

    “JSON Resume” is an open-source standard / schema, currently gaining in adoption, that standardizes the content of a resume into a shared underlying structure that others can use in automated resume formatters, parsers, etc. Read more about it here, or on GitHub.

    What is this tool?

    I made this because I wanted a way to quickly generate a JSON Resume export from my LinkedIn profile, and got frustrated with how locked down the LinkedIn APIs are and how slow it is to request your data export (up to 72 hours). “Install” the tool to your browser, then click to run it while looking at a LinkedIn profile (preferably your own), and my code will grab the various pieces of information off the page and then show a popup with the full JSON resume export that you can copy and paste to wherever you would like.


    Development

    With the rewrite to a browser extension, I actually configured the build scripts to be able to still create a bookmarklet from the same codebase, in case the bookmarklet ever becomes a viable option again.

    Building the browser extension

    npm run build-browserext will transpile and copy all the right files to ./build-browserext, which you can then side-load into your browser. If you want to produce a single ZIP archive for the extension, npm run package-browserext will do that.

    Use build-browserext-debug for a source-map debug version. To get more console output, append li2jr_debug=true to the query string of the LI profile you are using the tool with.

    Building the bookmarklet version

    Currently, the build process looks like this:

    • src/main.js -> (webpack + babel) -> build/main.js -> mrcoles/bookmarklet -> build/bookmarklet_export.js -> build/install-page.html
      • The bookmark can then be dragged to your bookmarks from the final build/install-page.html

    All of the above should happen automatically when you do npm run build-bookmarklet.

    If this ever garners enough interest and needs to be updated, I will probably want to re-write it with TypeScript to make it more maintainable.

    LinkedIn Documentation

    For understanding some peculiarities of the LI API, see LinkedIn-Notes.md.

    Debugging

    Debugging the extension is a little cumbersome, because of the way Chrome sandboxes extension scripts and how code has to be injected. An alternative to setting breakpoints in the extension code itself, is to copy the output of /build/main.js and run it via the console.

    li2jr = new LinkedinToResumeJson(true, true);
    li2jr.parseAndShowOutput();

    Even if you have the repo inside of a local static server, you can’t inject it via a script tag or fetch & eval, due to LI’s restrictive CSP.

    If you do want to find the actual injected code of the extension in Chrome dev tools, you should be able to find it under Sources -> Content Scripts -> top -> JSON Resume Exporter -> {main.js}

    Debugging Snippets

    Helpful snippets (subject to change; these rely heavily on internals):

    // Get main profileDB (after running extension)
    var profileRes = await liToJrInstance.getParsedProfile(true);
    var profileDb = await liToJrInstance.internals.buildDbFromLiSchema(profileRes.liResponse);

    DISCLAIMER:

    This tool is not affiliated with LinkedIn in any manner. Intended use is to export your own profile data, and you, as the user, are responsible for using it within the terms and services set out by LinkedIn. I am not responsible for any misuse, or repercussions of said misuse.

    Attribution:

    Icon for browser extension:

    Visit original content creator repository https://github.com/anthonyjdella/customized-linkedin-to-jsonresume
  • amigen7

    Introduction

    The scripts in this project are designed to ease the creation of LVM-enabled Enterprise Linux AMIs for use in AWS environments. They have been successfully tested with CentOS 7.x, Scientific Linux 7.x and Red Hat Enterprise Linux 7.x, and should work with other EL7-derived operating systems.

    Note: The scripts can also be used to generate bootstrap and/or recovery AMIs: non-LVMed AMIs intended to help generate the LVM-enabled AMIs or recover LVM-enabled instances. However, this functionality is only lightly tested. It is known to produce CentOS 7.x AMIs suitable for bootstrapping. It is also known to not produce RHEL 7.x AMIs suitable for bootstrapping. As this is not the scripts’ primary use case, documentation for it is not included (though it should be easy enough for an experienced EL7 administrator to figure out by reading the scripts’ contents).


    Visit original content creator repository https://github.com/plus3it/amigen7