Author: wbjuba8jii0b

  • MSc_Data_Science-Master_Thesis

    Apply Machine Learning in the Company to Predict the Quality of Sales Leads

    By Jordi Solé Casaramona.

    This repository is part of the thesis of the Master’s in Data Science by the University of Barcelona 2019/2020.

    This work has been done in collaboration with the EMEA 3D sales team at HP. The dataset on which the project is based currently contains more than 40,500 leads, from fiscal year 2016 quarter 1 to fiscal year 2020 quarter 4. On average, 257 new leads are entered into the system every week.

    The main objective of this project is to develop a data science pipeline capable of predicting the quality of every lead. By quality, we mean the probability that the lead will become a possible sale and advance to the next sales stages. By doing this, we want to achieve a transition from decisions based on the salesman’s intuition to more data-driven decision-making, with the use of a score from 0 to 1 that indicates the quality of the lead.

    The pipeline is explained in detail in the thesis file found in this repository, but its main structure is outlined below:

    Image of pipeline

    The pipeline consists of the following processes:

    • Joining: First, all the sources are joined into one table; this is done on a Z8 server inside HP.
    • Scraping: Web scraping techniques were used to enhance the information received from the company CRM.
    • Preprocessing: Some of the scraped pages were not in English and hence needed to be translated. Along with the translation, other tasks such as data cleaning, feature engineering, and encoding were needed to ensure the best possible dataset to feed the algorithm.
    • Training: The model can be periodically retrained via a pipeline parameter. Training was performed on all the data to predict the score for just the leads that were not yet assigned to any state. The algorithm selected to make the predictions was Extreme Gradient Boosting. The output of this training is a pickle file used to make faster predictions.
    • Prediction: In this step, the score for each lead is output, along with an explanation of the most important attributes for the decision, produced with the LIME package.
    • Power BI: The file resulting from the prediction is retrieved from the server and loaded into a Power BI report so the whole organization can use the data from the scraping, scoring, and explainability algorithms.
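    The retraining logic in the Training step follows a cache-or-retrain pattern: retrain and pickle the model when the pipeline parameter says so (or when no pickle exists yet), otherwise load the cached pickle for fast scoring. A minimal sketch of that control flow, using a trivial stand-in model rather than the thesis's actual Extreme Gradient Boosting model:

```python
import os
import pickle

class MeanScoreModel:
    """Trivial stand-in for the real XGBoost model: scores every
    lead with the mean label seen at training time."""
    def fit(self, labels):
        self.score_ = sum(labels) / len(labels)
        return self

    def predict(self, leads):
        # One quality score in [0, 1] per lead, as in the pipeline.
        return [self.score_ for _ in leads]

def get_model(pickle_path, labels, retrain=False):
    """Retrain and cache when asked (or when no cache exists yet);
    otherwise load the cached pickle for faster predictions."""
    if retrain or not os.path.exists(pickle_path):
        model = MeanScoreModel().fit(labels)
        with open(pickle_path, "wb") as f:
            pickle.dump(model, f)
        return model
    with open(pickle_path, "rb") as f:
        return pickle.load(f)

if __name__ == "__main__":
    path = "model.pkl"
    model = get_model(path, labels=[0, 1, 1, 0], retrain=True)  # periodic retrain
    model = get_model(path, labels=[])  # daily scoring run: loads the cached pickle
    print(model.predict(["lead_a", "lead_b"]))  # [0.5, 0.5]
```

    The file names and the stand-in model are illustrative; only the retrain-flag-plus-pickle pattern reflects the pipeline described above.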

    Abstract

    Many organizations are still driven by intuition and experience-based decision making. With this type of decision-making, problems such as human bias, loss of experienced workers, and reluctance to use more sophisticated information systems can become severe. With the arrival of the era of data, companies have at their disposal more information than ever before, but not many know how to use this resource to its full potential. In this work, we develop a data science pipeline to predict the quality of sales leads for the EMEA 3D sales department at HP, a project that aims to enhance the transition to a data-driven decision-making organization.

    In order to solve this problem, the developed pipeline focused on two tasks. The first involved developing a web scraping tool to obtain information that was not previously available in the company database, or that was very time-consuming to acquire due to the size of the database of more than 40,000 leads. The second was training a machine learning algorithm to predict a quality score for every lead, together with an explanation of the main features behind each decision.

    The result of this process greatly impacted the business: all the knowledge is kept in the company inside the machine learning model, and the explanations of each decision help build confidence in the model. Furthermore, the sales team used the score to make more data-driven decisions and save time by prioritizing the best-quality leads. The trained Extreme Gradient Boosting algorithm proved to be a 13.45% improvement over the baseline model, with a total accuracy of 0.94282 on the test set.

    Lastly, all these tasks were put together as a pipeline and uploaded to a server inside HP to execute the process automatically every day with minimal human intervention. The pipeline developed proved to give very positive results for the organization and further developments are being made to enhance the results.

    Code

    Disclaimer: The data used for this project is highly sensitive and, as part of the confidentiality agreement signed with HP, only minimal amounts of data with no customer details can be taken out of the company or used externally. As a result, no data exploration can be publicly shown. For this reason, only the output file with the predictions and explainability, containing no customer data, is exposed in this public domain. The other files have been left with just one or two lines to give a grasp of the fields in the dataset.

    Visit original content creator repository https://github.com/jordisc97/MSc_Data_Science-Master_Thesis
  • dialogue-latex

    LaTeX Template for the Dialogue Conference

    Proceedings of the Dialogue conference are typeset in the Adobe InDesign suite, which is not compatible with LaTeX. However, it is possible to convert LaTeX documents to the InDesign format, ICML, using Pandoc. This template aims to follow the conference requirements, so you can submit papers in your favourite typesetting system.


    This work is in the public domain and offered as-is without any warranties, see LICENSE.txt for details.

    Deprecated

    This template is deprecated. Please follow the official conference paper guidelines at https://www.dialog-21.ru/en/requirements/.

    Thanks!

    This template is brought to you by the NLPub project. If you found this template useful and would like to support its author, here is the link: https://nlpub.ru/NLPub:Support.

    Why are some parts of the document in red or blue?

    These colors help the publisher to transfer your content into the final template using InDesign. This is not a mistake and is done intentionally.

    Why cannot I use extra packages or modify dialogue.cls?

    The conversion process from LaTeX to InDesign is highly error-prone. The available features are strictly limited to provide a stable subset of functionality.

    Guidelines

    Do…

    • use the dialogue document class
    • make sure that your document compiles with zero errors
    • keep the document layout as simple as possible
    • use BibTeX to format the bibliography
    • run the conversion command on your machine

    Do Not…

    • modify dialogue.cls in any way
    • use any packages or features not included in the template
    • use \ref in the document
    • use tabular inside tabular

    Conversion

    We recommend using the following command for conversion:

    pandoc -s -f latex -t icml -o dialogue.icml --bibliography=dialogue.bib --filter pandoc-citeproc --csl=splncs.csl dialogue.tex

    This command converts your document dialogue.tex and its bibliography dialogue.bib to dialogue.icml. The splncs.csl style (provided in the repository) is used for bibliography conversion. Warnings related to formulas are OK.

    InDesign files are XML documents. Unfortunately, sometimes Pandoc produces non-well-formed XML. Please run the following command to verify this:

    xmllint --noout dialogue.icml

    If error messages are shown, edit the dialogue.icml file in any text editor and re-run the check until no errors are reported.

    Visit original content creator repository https://github.com/nlpub/dialogue-latex
  • School_District_Analysis

    School District Analysis

    Project Overview:

    Maria, the Chief Data Scientist for a city school district, is responsible for analyzing information she gathered from different sources and in different formats. She is asked to prepare all standardized test data for analysis, reporting, and presentation to provide insights about performance trends and patterns. These insights will be used to inform discussions and decisions at the school and district levels.

    In this module we are asked to help Maria analyze data on student funding and students’ test scores. Various information is available to us from each school, such as students’ grades, math scores, reading scores, school sizes, etc.

    Our task is to go through the data, analyze it, and help explain whether there are any trends impacting the schools’ performance. Our results will help the schools better understand their budgets and priorities moving forward.

    Once we finished our analysis, we were informed that some of the information initially provided showed evidence of academic dishonesty, so we were asked to remove and not consider the math and reading scores for a specific high school, “Thomas High School”, and keep the rest of the data intact.
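    Removing the flagged scores while keeping the rest of the data intact is typically done in pandas by masking the affected rows and assigning NaN. A minimal sketch, assuming the data sits in a DataFrame with school_name, grade, math_score, and reading_score columns (the column names and toy values are illustrative):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the real student dataset (column names are illustrative).
students = pd.DataFrame({
    "school_name":   ["Thomas High School", "Thomas High School", "Cabrera High School"],
    "grade":         ["9th", "10th", "9th"],
    "math_score":    [84.0, 88.0, 90.0],
    "reading_score": [81.0, 92.0, 95.0],
})

# Replace 9th-grade Thomas High School scores with NaN; everything else stays intact.
mask = (students["school_name"] == "Thomas High School") & (students["grade"] == "9th")
students.loc[mask, ["math_score", "reading_score"]] = np.nan

# Averages now silently exclude the NaN rows, which is what shifts the summaries.
print(students["math_score"].mean())  # mean of 88.0 and 90.0 -> 89.0
```

    Because pandas excludes NaN from means and counts by default, this one assignment is what drives the changes to the district and school summaries discussed below.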

    School District Analysis Results:

    How is the district summary affected?

    Once the Thomas High School 9th-grade math and reading scores were removed from consideration, the new district summary showed small changes to the results.

    • The total school counts, total students, and total budgets remained unchanged.

    • Average Scores changed slightly:

      • Math Scores changed from 79 to 78.9.
      • Reading Scores remained at 81.9.
    • % Passing Math, Reading, and Overall also changed slightly:

      • % Passing Math dropped from 75.0 to 74.8.
      • % Passing Reading dropped from 85.8 to 85.7.
      • % Overall Passing dropped from 65.2 to 64.9.

    How is the school summary affected?

    When we replaced all Math and Reading scores for Thomas High School (THS) ninth graders with NaN values, the total student count used in the calculations automatically dropped: the initial count was 39,170, which changed to 38,709 after updating the values.

    Also, the average math score, the percentage of students passing math, the percentage of students passing reading, and the overall passing percentage were all impacted.

    • Thomas High School Average Scores changed:

      • Average Math Score changed from 83.4 to 83.3.
      • Average Reading Score changed from 83.8 to 83.9.
    • Thomas High School Percentages also changed:

      • % Passing Math dropped from 93.3 to 66.9.
      • % Passing Reading dropped from 97.3 to 69.7.
      • % Overall Passing dropped from 90.9 to 65.1.

    The following shows the results for THS when only results for the 10th through 12th grades were calculated.

    How does replacing the ninth graders’ math and reading scores affect THS’s performance relative to the other schools?

    In the initial analysis, Thomas High School was ranked second out of the 15 high schools, based on its overall passing percentage of 90.94%. However, this percentage dropped all the way to 65.07% once the 9th-grade Math and Reading scores were replaced by NaN values.

    Initial Performance:

    THS_initial_performance

    NaN Performance:

    THS_NaN_performance

    How does replacing the ninth-grade scores affect the following:

    • Math and reading scores by grade
    • Scores by school spending
    • Scores by school size
    • Scores by school type

    THS’s overall passing percentage improved back up to 90.63% once we considered only scores for the 10th through 12th grades. New results below:

    • % Passing Math increased from 66.9 to 93.2.
    • % Passing Reading increased from 69.7 to 97.0.
    • % Overall Passing improved from 65.1 to 90.3.

    Updated Performance:

    THS_final_performance

    School District Analysis Summary:

    Summarize four changes in the updated school district analysis after reading and math scores for the ninth grade at Thomas High School have been replaced with NaNs.

    Once we replaced all 9th-grade Math and Reading scores with NaN values, we noticed changes in both the school summary and the district summary, even though they were minimal in some categories. Once we removed the 9th-grade scores from the analysis, we noticed that:

    • THS student counts drastically dropped, as mentioned above.
    • Average Math and Reading scores improved.
    • Passing % for Math and Reading both improved.
    • Overall passing percentage improved.
    Visit original content creator repository https://github.com/abidor13/School_District_Analysis
  • Plain-Text-Reader

    Plain Text Reader

    Plain Text Reader is a desktop application for Windows that allows users to edit and read plain text files in various formats, such as .txt, .csv, .rtf, and .xml. It is built using C# and the .NET Framework, and offers a simple and intuitive interface for managing plain text files. Whether you need to quickly edit a text file or browse through a CSV spreadsheet, Plain Text Reader has got you covered.

    Technologies used

    • C# programming language
    • .NET Framework
    • Microsoft Visual Studio IDE

    Features

    • Open and edit .txt, .csv, .rtf, and .xml files
    • Save changes to existing files or create new ones
    • Edit text and fields within a file
    • Change font size and style (.rtf only)
    • Change text color (.rtf only)
    • Browse through the raw code of files

    Prerequisites

    • Windows 10 or later
    • .NET Framework 4.8 or later

    Installation

    Clone or download this repository to your local machine. Open the solution file (PlainTextReader.sln) in Microsoft Visual Studio. Build the solution by clicking on the “Build” button in the Visual Studio toolbar. Run the application by clicking on the “Start” button in the Visual Studio toolbar or by pressing the “F5” key.

    That’s it! The application should now be up and running on your local machine. If you encounter any issues during the installation process, create an issue in this repository.

    Screenshots

    Principal Screen .txt editor .rtf editor .csv editor .xml editor
    Visit original content creator repository https://github.com/Jotaherrera/Plain-Text-Reader
  • YouTube

    About The Project

    A YouTube clone built using React JS, an API from Rapid API, and Material UI.

    Watch Live Demo

    Features:

    • Video Page with controlled video playback
    • Filtering videos by categories in sidebar
    • Search video or channel feature
    • Channel Page
    • Related videos section
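    Features like search and related videos are driven by requests to a RapidAPI-hosted YouTube API, which generally means sending the `X-RapidAPI-Key` and `X-RapidAPI-Host` headers with every call. A hedged sketch of a small request builder (the host name `youtube-v31.p.rapidapi.com` and the `search` endpoint are assumptions; check the API's page on Rapid API for the actual values):

```javascript
// Builds the URL and fetch options for a RapidAPI request.
// The host and endpoint below are illustrative assumptions, not confirmed values.
const RAPIDAPI_HOST = "youtube-v31.p.rapidapi.com";

function buildSearchRequest(query, apiKey) {
  // Encode the query so spaces and special characters survive the URL.
  const url =
    `https://${RAPIDAPI_HOST}/search?part=snippet&q=${encodeURIComponent(query)}`;
  const options = {
    method: "GET",
    headers: {
      "X-RapidAPI-Key": apiKey,
      "X-RapidAPI-Host": RAPIDAPI_HOST,
    },
  };
  return { url, options };
}

// Usage inside a React effect or data hook (sketch):
//   const { url, options } = buildSearchRequest(query, process.env.REACT_APP_RAPID_API_KEY);
//   fetch(url, options).then((res) => res.json()).then(setVideos);
```

    Keeping the key in an environment variable rather than in source is the usual Create React App convention for secrets of this kind.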

    Getting Started with Create React App

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    npm start

    Runs the app in the development mode.
    Open http://localhost:3000 to view it in your browser.

    The page will reload when you make changes.
    You may also see any lint errors in the console.

    npm test

    Launches the test runner in the interactive watch mode.
    See the section about running tests for more information.

    npm run build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    npm run eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.

    Code Splitting

    This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting

    Analyzing the Bundle Size

    This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size

    Making a Progressive Web App

    This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app

    Advanced Configuration

    This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration

    Deployment

    This section has moved here: https://facebook.github.io/create-react-app/docs/deployment

    npm run build fails to minify

    This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify

    Visit original content creator repository https://github.com/devangshrimali99/YouTube
  • photobook

    How to run the project

    • Install Homestead by following this documentation; for Windows users it may be easier to install it using Docker
    • Clone this repo in the working folder (Ex.: ~/code/project1)
    • Launch vagrant up
    • Go to Homestead.test

    Useful resources:


    About Laravel

    Laravel is a web application framework with expressive, elegant syntax. We believe development must be an enjoyable and creative experience to be truly fulfilling. Laravel takes the pain out of development by easing common tasks used in many web projects.

    Laravel is accessible, powerful, and provides tools required for large, robust applications.

    Learning Laravel

    Laravel has the most extensive and thorough documentation and video tutorial library of all modern web application frameworks, making it a breeze to get started with the framework.

    If you don’t feel like reading, Laracasts can help. Laracasts contains over 1500 video tutorials on a range of topics including Laravel, modern PHP, unit testing, and JavaScript. Boost your skills by digging into our comprehensive video library.

    Contributing

    Thank you for considering contributing to the Laravel framework! The contribution guide can be found in the Laravel documentation.

    Code of Conduct

    In order to ensure that the Laravel community is welcoming to all, please review and abide by the Code of Conduct.

    Security Vulnerabilities

    If you discover a security vulnerability within Laravel, please send an e-mail to Taylor Otwell via taylor@laravel.com. All security vulnerabilities will be promptly addressed.

    License

    The Laravel framework is open-sourced software licensed under the MIT license.

    Visit original content creator repository https://github.com/Ampsicora/photobook
  • packet-create-project

    GitHub Actions for creating projects on Packet.com

    Automate your infrastructure

    This GitHub Action will create a new project on packet.com. Projects allow you to organize groups of resources and collaborators within your organization.

    Creating projects

    With this action you can automate your workflow by provisioning projects using the packet.com api.

    To use this action you will first need an authentication token which can be generated through the Packet Portal.

    Packet.com is NOT a free service, so you will be asked to provide billing information. This action will NOT have access to that information.

    Sample workflow that uses the packet-create-project action

    # File: .github/workflows/workflow.yml
    
    on: [push]
    
    name: Packet Project Sample
    
    jobs:
      create-new-project:
        runs-on: ubuntu-latest
        name: Creating new packet project
        steps:
          - uses: mattdavis0351/packet-create-project@v1
            id: project
            with:
              API_key: ${{ secrets.PACKET_API_KEY }}
              org_name: My Packet org # if not supplied will use default org for API key
              project_name: My-new-packet-project

    Available Inputs

    Input        | Description                                                                   | Default         | Required
    API_key      | Packet.com API authorization token                                            | No key supplied |
    org_name     | Organization to place new project in; uses default user org if not specified  | default         |
    project_name | Desired name for new project                                                  | GitHub Actions  |

    Outputs from action

    This action supplies the following outputs which can be consumed by subsequent actions in the current job.

    Output       | Description
    project_id   | ID of the newly created project, returned as a string
    project_name | Name of the newly created project, returned as a string
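    These outputs can be consumed in later steps of the same job through the standard `steps.<id>.outputs` context. For example, extending the sample workflow above with a follow-up step (the step itself is illustrative):

```yaml
# Continues the job after the step with id: project
      - name: Show new project details
        run: |
          echo "Created project: ${{ steps.project.outputs.project_name }}"
          echo "Project ID: ${{ steps.project.outputs.project_id }}"
```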


    Visit original content creator repository
    https://github.com/mattdavis0351/packet-create-project

  • mr-social-assets

    mr-social-assets

    Assets and glTF asset pipeline for the Mozilla Social MR team.

    Build Script Setup:

    Install rust and node.js.

    In your terminal run:

    cargo install gltf_unlit_generator
    

    The build script uses gltf-bundle to build glTF bundle files and dependencies that are optimized for distribution on the web. Any files in the project ending with .bundle.config.json will be used by the build script to generate glTF bundles and their associated files. The output will be placed in the dist/ folder.

    Example .bundle.config.json file:

    {
      "name": "BotDefault",
      "version": "0.0.1",
      "output": {
        "filePath": "bots"
      },
      "assets": [
        {
          "name": "BotDefault",
          "src": "./BotDefault_Avatar.fbx",
          "components": ["./components.json"]
        }
      ]
    }

    The name, version, and assets properties are required.

    output.filePath determines the subdirectory to place the bundle and associated files in the dist/ directory. Files with the same name will be overwritten. This can be useful when assets have textures or binary data in common.

    The asset.src property can be a .fbx, .gltf, or .glb file. This asset file will have the following build steps applied to it before being placed in the dist/ folder:

    1. Convert from .fbx or .glb to .gltf. .fbx conversions are handled by FBX2glTF.

    2. Generate unlit textures and add the MOZ_alt_material extension to any materials in the .gltf file using gltf-unlit-generator.

    3. Add component data using gltf-component-data to gltf.node.extras using the supplied asset.components array. The components array can include paths to json files containing component data or JSON objects containing component data.

      Example component.json:

      {
        "scenes": {
          "Root Scene": {
            "loop-animation": {
              "clip": "idle_eyes"
            }
          }
        },
        "nodes": {
          "Head": {
            "scale-audio-feedback": ""
          }
        }
      }

      Example .bundle.config.json file:

      {
        "name": "BotDefault",
        "version": "0.0.1",
        "output": {
          "filePath": "bots"
        },
        "assets": [
          {
            "name": "BotDefault",
            "src": "./BotDefault_Avatar.fbx",
            "components": [
              "./components.json",
              {
                "nodes": {
                  "Head": {
                    "test-component": true
                  }
                }
              }
            ]
          }
        ]
      }
    4. Using gltf-content-hash, rename all referenced assets in the glTF to <contenthash>.<extension>. This ensures that cached files referenced in the .gltf can be updated. Assets shared between multiple .gltf files will have the same content hash and will be fetched from cache rather than downloaded again. .gltf files will be renamed to <gltfName>-<contentHash>.gltf so that you can easily find and preview gltf files, but still get the same cache busting functionality.

    5. Output two final .bundle.json files <bundle.name>.bundle.json and <bundle.name>-<bundle.version>-<timestamp>.bundle.json. The first bundle will always contain the most recent assets. The second will be a version-locked bundle that you can assume is immutable.
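    The content-hash renaming in step 4 can be illustrated in a few lines of Python. This is a simplified sketch of the idea, not the actual gltf-content-hash implementation; the hash algorithm and digest length are assumptions:

```python
import hashlib
from pathlib import Path

def content_hashed_name(path):
    """Return '<contenthash>.<extension>' for a file, so identical content
    always maps to the same cached name (a sketch of what gltf-content-hash
    does for assets referenced by a .gltf file)."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()[:20]
    return f"{digest}{Path(path).suffix}"

if __name__ == "__main__":
    p = Path("texture.png")
    p.write_bytes(b"fake image bytes")
    hashed = content_hashed_name(p)
    p.rename(hashed)  # identical files collapse to one cached name
    print(hashed)
```

    Because equal bytes produce equal names, two .gltf files sharing a texture end up referencing the same cached file, while any change to the bytes produces a new name and busts the cache.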

    Running the Build Script:

    In your terminal cd into the mr-social-assets directory and run:

    npm run build
    

    Alternatively on Windows you can double-click the build.bat script.

    Deploying to S3

    Place the .env file with AWS/S3 credentials in the mr-social-assets folder.

    Example .env:

    AWS_ACCESS_KEY=myaccesskey
    AWS_SECRET_ACCESS_KEY=mysecret
    S3_BUCKET=mybucket
    

    In your terminal cd into the mr-social-assets directory and run:

    npm run deploy
    

    Setting CORS Settings for your S3 Bucket

    Default CORS settings are stored in cors-config.json.

    Using the AWS CLI:

    cd mr-social-assets
    aws s3api put-bucket-cors --bucket <your bucket name> --cors-configuration file://cors-config.json
    

    License

    All assets are licensed with the Creative Commons Attribution-ShareAlike 4.0 International License.

    Code is licensed with the Mozilla Public License 2.0.

    Visit original content creator repository
    https://github.com/MozillaReality/mr-social-assets