Category: Blog

  • MXNetDotNet


    Apache MXNet wrapper written in C# for Windows, macOS and Linux
    MXNetDotNet is a thin wrapper that stays close to the original MXNet API, so you don't have to struggle with complex C/C++ syntax!

    Dependencies Libraries and Products

    License: Apache License 2.0

    Author: Apache Software Foundation

    Principal Use: A deep learning framework. The main goal of MXNetDotNet is to wrap Apache MXNet in C#.

    License: The BSD 3-Clause License

    Author: Intel Corporation, Willow Garage, Itseez

    Principal Use: Apache MXNet depends on it.

    License: The BSD 3-Clause License

    Author: Zhang Xianyi, Wang Qian, Werner Saar

    Principal Use: Apache MXNet depends on it.

    License: The MIT License

    Author: Giacomo Stelluti Scala & Contributors

    Principal Use: Parses command-line arguments for the MXNet.Net samples/tools.

    License: BSD 3-Clause License

    Author: shimat

    Principal Use: .NET Framework wrapper for OpenCV. Some MXNet.Net sample projects depend on it.

    License

    MXNet.Net is licensed under the MIT License. See LICENSE.

    Visit original content creator repository https://github.com/takuya-takeuchi/MXNetDotNet
  • fanqiangjichang.github.io

    翻墙机场 ("circumvention airports") | Aug 25, 20.7 MB/s | Free Clash/V2ray/Shadowrocket/SSR subscription node sharing. Updated 2025-08-25 10:41:42

    All free Clash nodes are scraped from the public internet; do not use them for illegal purposes. Node address: click to open

    Clash usage tutorial:

    A detailed guide to bypassing internet censorship

    Subscription links:

    Clash subscription link

    V2ray subscription link

    Sing-box subscription link

    More Clash node subscriptions:

    High-speed provider recommendation 1: 农夫山泉

    • No peak-hour slowdowns, instant 4K playback all day, data centers worldwide, plenty of IPs, 99% streaming unlock: YouTube, Hulu, Netflix and more play silky-smooth! IPLC and IEPL relays with point-to-point dedicated lines. The best choice for high-speed browsing; register now for a free trial!
    • Registration link: 农夫山泉 (click to register)

    High-speed provider recommendation 2: 星辰VPN

    • No peak-hour slowdowns, instant 4K playback all day, data centers worldwide, plenty of IPs, 99% streaming unlock: YouTube, Hulu, Netflix and more play silky-smooth! IPLC and IEPL relays with point-to-point dedicated lines. The best choice for high-speed browsing; register now for a free trial!
    • Registration link: 星辰VPN (click to register)

    High-speed provider recommendation 3: 西游云

    • No peak-hour slowdowns, instant 4K playback all day, data centers worldwide, plenty of IPs, 99% streaming unlock: YouTube, Hulu, Netflix and more play silky-smooth! IPLC and IEPL relays with point-to-point dedicated lines. The best choice for high-speed browsing; register now for a free trial!
    • Registration link: 西游云 (click to register)

    High-speed provider recommendation 4: 狗狗加速

    • 狗狗加速 was the first provider to launch the Hysteria1 protocol and has now fully rolled out Hysteria2. Unlike hy1, hy2 fully optimizes connection setup (0-RTT) to further reduce latency, and uses a brand-new bandwidth-control scheme to get the most out of your bandwidth! Instant 4K playback all day, data centers worldwide, plenty of IPs, 99% streaming unlock: YouTube, Hulu, Netflix and more play silky-smooth! IPLC and IEPL relays with point-to-point dedicated lines. The best choice for high-speed browsing; register now for a free trial!
    • Registration link: 狗狗加速 (click to register)

    Tool recommendations

    Tool Windows macOS Android iOS Notes
    Shadowsocks shadowsocks-win
    shadowsocks-qt5
    SSD
    ShadowsocksX-NG
    Surge
    Google Play
    Github
    SSD
    Surfboard
    Shadowrocket
    Surge4
    ShadowSocks
    QuantumultX
    iOS tools must be downloaded with a US-region Apple ID
    ShadowsocksR SSR
    HMBSbige/SSR
    SSRR
    QV2ray Qv2ray Qv2ray official site
    V2ray V2rayN V2rayU
    V2rayX
    V2rayNG
    Actinium
    BifrostV
    kitsunebi official site
    Outline Outline Outline Outline Outline
    Netch Netch
    SSCap SSCap4.0
    SSTap SSTap 1.0.9.7
    Sockscap64 Sockscap64
    Brook Brook Brook Brook Brook
    Clash Clash
    ClashDotNet
    Clashy
    ClashXW
    ClashMini
    ClashX Pro
    ClashX
    Clashy
    Clash ClashR ClashR docs
    trojan trojan trojan trojan
    WireGuard WireGuard WireGuard WireGuard
    lantern lantern lantern lantern
    Tor Browser Tor Browser Tor Browser Tor Browser official site
    无界浏览 无界浏览 无界浏览
    自由門 自由門 自由門
    赛风 psiphon3 Psiphon Pro Psiphon user guide

    Stair SpeedTest: a batch node-testing tool; supports single links and subscription URLs, and the SS/SSR/V2ray protocols

    Visit original content creator repository
    https://github.com/fanqiangjichang/fanqiangjichang.github.io

  • json-bloomfilter

    Serialisable (JSON) Bloom Filter


    A Bloom filter implementation that is serialisable to JSON and compatible between Ruby and JavaScript. Very useful when you need to train a Bloom filter in one language and use it in the other.

    Why?

    Bloom filters allow for space-efficient membership lookups in a list, without having to store all the items in the list. This is useful for looking up tags, domain names, links, or anything else you might want to check client-side.

    This gem lets you build a Bloom filter server-side, add all your entries to it, and then serialise the filter to JSON. On the client side you can then load the serialised data into the JavaScript version and use the Bloom filter as-is.

    All of this without sending the entire list to the client, which you might want to avoid for either security or efficiency reasons.

    Installation

    Ruby

    gem install json-bloomfilter
    

    JavaScript

    With the gem installed, run

    json-bloomfilter install
    

    and json-bloomfilter.min.js will be copied to your local directory. If you are in a Rails project, it will be copied to your app/assets/javascripts folder.

    Usage

    Ruby

    require "json-bloomfilter"
    
    # use the factory to configure the filter
    filter = JsonBloomFilter.build 10000, 0.01 # number of expected items, desired error rate

    # or define the BloomFilter manually
    filter = JsonBloomFilter.new size: 100
    
    # and add entries
    filter.add "foo"
    filter.add "bar"
    # alternatively
    filter.add ["foo", "bar"]
    # test the entries
    filter.test "foo" #=> true
    filter.test "bar" #=> true
    filter.test "doh" #=> probably false
    
    # export the filter to a hash or json
    filter.to_json #=> hash as JSON
    config = filter.to_hash #=> { "size" => 100, "hashes" => 4, "seed" => 1234567890, "bits" => [...] }
    
    # use the hash to generate a new filter with the same config
    filter2 = JsonBloomFilter.new config
    filter2.test "foo" #=> true
    filter2.test "bar" #=> true
    filter2.test "doh" #=> probably false

    JavaScript

    // use the factory to configure the filter
    filter = JsonBloomFilter.build(10000, 0.01); // number of expected items, desired error rate

    // or define the filter manually
    filter = new JsonBloomFilter({ size: 100 });
    
    // and add entries
    filter.add("foo");
    filter.add("bar");
    // alternatively
    filter.add(["foo", "bar"]);
    // test the entries
    filter.test("foo"); //=> true
    filter.test("bar"); //=> true
    filter.test("doh"); //=> probably false
    
    // export the filter to a hash or json
    filter.toJson();  //=> hash as JSON
    config = filter.toHash(); //=> { "size": 100, "hashes": 4, "seed": 1234567890, "bits": [...] }
    
    // use the hash to generate a new BloomFilter with the same config
    filter2 = new JsonBloomFilter(config);
    filter2.test("foo"); //=> true
    filter2.test("bar"); //=> true
    filter2.test("doh"); //=> probably false

    Options

    Valid options for the constructor are:

    • size (default: 100), the bit size of the bit array used
    • hashes (default: 4), the number of hashes used to calculate the bit positions in the bit field
    • seed (default: current UNIX time), the seed for the hashing method

    Additionally you can pass along:

    • bits (default: null), an array with the bitfield in non-bit format. Use #to_hash to create these for your active BloomFilter.

    Credits

    Compatibilities

    Confirmed:

    • Ruby 1.8.7
    • Ruby 1.9.2
    • Ruby 1.9.3
    • Rubinius (1.8 mode)
    • Rubinius (1.9 mode)
    • REE

    Probably will work:

    • JRuby

    Contributing

    1. Fork it
    2. Create your feature branch (git checkout -b my-new-feature)
    3. Commit your changes (git commit -am 'Add some feature')
    4. Push to the branch (git push origin my-new-feature)
    5. Create new Pull Request

    Release notes

    • 0.1.5 Changes namespacing
    • 0.1.4 Changes .build function to take a list of items
    • 0.1.3 Adds a check for non positive capacity values on build
    • 0.1.2 Adds Zlib dependency
    • 0.1.1 Fixes a JS integer overflow issue and makes Ruby 1.8.7 compatible
    • 0.1.0 Adds travis-ci. Bumped minor release version
    • 0.0.6 Adds a factory that takes a size + error rate
    • 0.0.5 Adds installer of JS file
    • 0.0.4 Adds JS tests
    • 0.0.3 Adds Ruby tests
    • 0.0.2 Adds implementation of Ruby and JS filters
    • 0.0.1 Gem skeleton

    License

    See LICENSE

    Visit original content creator repository https://github.com/cbetta/json-bloomfilter
  • push-to-gcr-github-action

    Push to GCR GitHub Action

    An action that builds a Docker image and pushes it to Google Container Registry and Google Artifact Registry.

    This action can run on every git push or on every tag creation.

    Inputs

    gcloud_service_key

    The Google Cloud service account key. The JSON file can be base64-encoded or plain text.

    Prior to version 4.1, this field was required.

    From version 5, this field is optional when you are using workload identity with google-github-actions/auth.

    registry

    The registry where the image should be pushed. Default: gcr.io.

    project_id

    The project id. This field is required.

    image_name

    The image name. This field is required.

    image_tag

    The tag for the image. To create multiple tags of the same image, provide comma-separated tag names (e.g. v2.1,v2,latest).

    Default: latest.

    To use the pushed Tag Name as image tag, see the example.

    dockerfile

    The image building Dockerfile.
    If the context is not the root of the repository, Dockerfile from the context folder will be used.

    Default: ./Dockerfile.

    context

    The docker build context. Default: .

    target

    If you use a multi-stage build and want to stop building at a certain image, you can use this field. The default value is empty.

    build_args

    Pass a list of environment variables as build args for docker build, separated by commas, e.g. HOST=db.default.svc.cluster.local:5432,USERNAME=db_user.

    push_only

    If you want to skip the build step and just push an image built by a previous step, use this option. Default: false.

    Permissions

    The service key you provide must have the Storage Admin role to push images to GCR.
    It is possible to use the lower access level Storage Object Admin, but it only works if the registry already exists. You must also add the Storage Legacy Bucket Reader permission on the artifacts.<project id>.appspot.com bucket for the given service account.

    Reference 1

    Reference 2

    To create service key/account visit here

    Workload Identity Federation

    If you want to use Workload Identity Federation, follow the steps here to set it up.

    Example usage

    name: Push to GCR GitHub Action
    on: [push]
    jobs:
      build-and-push-to-gcr:
        runs-on: ubuntu-latest
        permissions:
          contents: 'read'
          id-token: 'write'
        steps:      
          - uses: actions/checkout@v3
          - name: Authenticate to Google Cloud
            id: auth
            uses: google-github-actions/auth@v2
            with:
              workload_identity_provider: projects/123123123/locations/global/workloadIdentityPools/the-workload-pool/providers/the-provider
              service_account: only-storage-object-adm@<PROJECT_ID>.iam.gserviceaccount.com
          - uses: RafikFarhad/push-to-gcr-github-action@v5-rc1
            with:
              # gcloud_service_key: ${{ secrets.GCLOUD_SERVICE_KEY }} # can be base64 encoded or plain text || not needed if you use google-github-actions/auth
              registry: gcr.io
              project_id: my-awesome-project
              image_name: backend
              image_tag: latest,v1
              dockerfile: ./docker/Dockerfile.prod
              context: ./docker

    A complete workflow example with all type of authentication flavour

    More Example

    Contribution

    • Fork
    • Implement your awesome idea or fix a bug
    • Create PR 🎉

    NB: The included workflow, which tests the action's basic functionality, needs a GitHub secret named JSON_GCLOUD_SERVICE_ACCOUNT_JSON.
    Currently, the workflow is not testable from forked repositories, but I have an action item to enable this.

    Visit original content creator repository
    https://github.com/RafikFarhad/push-to-gcr-github-action

  • swiss_taxmapp

    README Taxmapp

    README

    TAXMAPP – The Ultimate Swiss Taxes Comparator

    Switzerland has been a federal republic since 1848, hence its cantons may differ significantly in regulations, permissions, prohibitions and… taxes! This app offers a preview of some of the most convenient places to settle down in Helvetic territory, based on a tax comparison between cantons and municipalities.

    Getting started

    Download the official GitHub repository over HTTPS with the command

    git clone https://github.com/GioZd/swiss_taxmapp.git

    or over SSH with the command

    git clone git@github.com:GioZd/swiss_taxmapp.git

    If you run into trouble or do not have Git installed, just download and extract the ZIP archive.

    Before launching the program, Python 3 and the packages streamlit, polars, altair, geopandas and xlsx2csv must be installed (working versions of Python and of each dependency are pinned in pyproject.toml). To launch the program, execute the following command from the project folder:

    streamlit run app.py

    If you use uv as an environment manager, preferably run

    uv run streamlit run app.py

    which will create and activate a virtual environment containing all the correct dependencies, including Python 3.13.[1]

    About Swiss cantons

    Switzerland is a federal republic divided into 26 cantons. For convenience, the names of the cantons are mostly abbreviated with their official codes throughout the app and hereafter in this README.[2]

    Code Name Capital
    ZH Zurich Zurich
    BE Bern Bern
    LU Lucerne Lucerne
    UR Uri Altdorf
    SZ Schwyz Schwyz
    OW Obwalden Sarnen
    NW Nidwalden Stans
    GL Glarus Glarus
    ZG Zug Zug
    FR Freiburg Freiburg
    SO Solothurn Solothurn
    BS Basel-Stadt Basel
    BL Basel-Landschaft Liestal
    SH Schaffhausen Schaffhausen
    AR Appenzell Ausserrhoden Herisau
    AI Appenzell Innerrhoden Appenzell
    SG St. Gallen St. Gallen
    GR Graubünden Chur
    AG Aargau Aarau
    TG Thurgau Frauenfeld
    TI Ticino Bellinzona
    VD Vaud Lausanne
    VS Valais Sion
    NE Neuchâtel Neuchâtel
    GE Geneva Geneva
    JU Jura Delémont

    Insights of data collection and tax calculation

    Data were collected, in the form of Excel tables, from this online tool for tax calculation supplied by the Federal Tax Administration through its internal APIs. The data show four main patterns, briefly described below.

    1. $\Delta_s\times c_s\% + \text{Tax Base}$ where $\Delta_s=\text{Value}-\text{Floor Rank(Value)}$ and $c_s\%$ is a rank-relative coefficient. Income tax for BS, FR, GE, GL, GR, LU, NE, SH, SO, TG, TI, VD, VS, ZG, assets tax for AR, BE, BL, BS, FR, GE, JU, NE, SO, TI, VD, VS, ZG and the Federal Tax follow this formula. E.g. in canton Geneva (GE) a CHF 100,000 income implies a $(100,000-76,812)\times 15.5\% + 7,828=\text{CHF } 11,422.14$ base for further numeric processing. The tax base may be fixed by the canton or calculated iteratively starting from zero.
    2. $\sum_{i=1}^{s}\Delta_i\cdot c_i\%$, which means that a CHF 100,000 income in Zurich will be divided as follows: $(6,900\times 0\%) + (4,900\times 2\%) + \dots + (17,400\times 8\%) + (\text{remaining }24,600\times 9\%)=\text{CHF }6,207$ (then further processed). Income tax for AG, AI, AR, BE, BL, JU, NW, SG, SZ, ZH and assets tax for AG, GR, SH and ZH follow this procedure.
    3. Flat tax, a unique rate factor for boundless wealth, such as income tax in OW and UR and assets tax in AI, GL, LU, NW, OW, SG, SZ, TG and UR.
    4. BL (Basel-Landschaft) pursues its own way, namely a rank-based formula. Assuming the value falls in the $s$-th wealth rank, the formula looks like $a_s\cdot \mathrm{Value}+b_s\cdot\mathrm{Value}\cdot\left(\log(\mathrm{Value}) - 1\right)+c_s$. These values are then multiplied by a specific factor depending on the canton and the commune and finally summed together.
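    Pattern 2 above is an ordinary marginal-bracket sum. A minimal Python sketch with a toy schedule (illustrative brackets only, not the official Zurich rates):

```python
def bracket_tax(value, brackets):
    """Marginal-bracket tax: brackets is an ordered list of (width, rate)
    pairs; each slice of the value is taxed at its own bracket's rate.
    Rates are fractions (0.02 means 2%)."""
    tax = 0.0
    remaining = value
    for width, rate in brackets:
        portion = min(remaining, width)
        tax += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return tax

# Toy schedule: first 6,900 untaxed, next 4,900 at 2%, everything above at 9%
schedule = [(6_900, 0.00), (4_900, 0.02), (float("inf"), 0.09)]
tax_base = bracket_tax(100_000, schedule)
```

    The real schedules have many more ranks and are followed by the canton- and commune-specific multipliers described above.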

    DISCLAIMER: This app offers only approximate calculations that ignore important features and articulations of the official tax computations, which also take into consideration religion, age, family status and so on. Furthermore, even the online tool for tax calculation supplied by the Federal Tax Administration (the source of all data) claims not to be binding.[3] [4]

    Geographical data

    Geographical datasets used to draw the base map in the homepage section are released periodically by the Federal Statistical Office under the OPEN-BY-ASK License.[5] Coordinates are expressed by default in the CH1903+ (LV95) coordinate reference system, in which positions are measured in metres north/south and east/west from the old observatory of Bern, plus two different constants (one per axis). The Geopandas GeoDataFrame offers a method to convert back to the more familiar EPSG:4326 coordinate reference system and then project the map with the most common projection types (Mercator by Altair default). Alternatively, this dataframe has a structure that allows it to be projected as-is, with projection type "identity". This approach maintains the predefined Swiss-grid projection, a cylindrical projection centred on Bern.[6] However, the former method was preferred because, given Switzerland's relatively small extent, it does not significantly affect the comparison between internal surfaces, and it is easier to understand.

    Enhancement proposals

    1. More accurate tax calculations:
      • computation of personal tax,
      • computation of church tax,
      • using the split factor for couples,
      • computation of various deductions (presence of children, celibacy, young age etc.).
    2. Interactive maps: due to Streamlit limitations an interaction with the mouse cursor has not been possible, but other libraries such as pydeck might be more compatible with Streamlit.
    3. Selection by travel distance (multiple data sources can be found on the Internet under Open License).
    4. Progressively populating SQLite database to collect increasingly more data without overwriting the previous.
    5. Code optimization for faster response and visualization.
    6. Translations to German, Italian, French or other languages.

    Web app

    https://taxmapp.streamlit.app

    Stable version

    v.1.1.3

    References and data sources

    [1] uv. Astral Docs. https://docs.astral.sh/uv. An extremely fast Python package and project manager, written in Rust.

    [2] Cantons of Switzerland – Wikipedia. https://en.wikipedia.org/wiki/Cantons_of_Switzerland

    [3] FTA Tax Calculator. Accessed December 18, 2024. https://swisstaxcalculator.estv.admin.ch. Data source and documentation.

    [4] Federal tax administration FTA. https://www.estv.admin.ch/estv/en/home.html. FTA website for more detailed readings.

    [5] Base Maps – Federal Statistical Office FSO. https://www.bfs.admin.ch/bfs/en/home/statistics/regional-statistics/base-maps.html. Geographical databases for statistical mapping.

    [6] Geodetic Reference systems – Federal Office of Topography (swisstopo). https://www.swisstopo.admin.ch/en/geodetic-reference-systems. Deeper information about the Swiss reference system.

    License and Credits

    The code is under MIT License.

    Data hereby provided are under their respective licenses and ownerships.


    Giovanni Zedda, BSc student at the Department of Statistical Sciences

    University of Padua, 19 December 2024


    Copyright (c) 2024 Giovanni Zedda

    Visit original content creator repository https://github.com/GioZd/swiss_taxmapp
  • Process-Scheduling-Simulation

    Simulation Scheduling Project

    Overview

    This project simulates different process scheduling policies by running CPU-bound and I/O-bound tasks under various scheduling strategies. The objective is to analyze how different scheduling algorithms impact the turnaround time of processes.

    Files Overview

    Source Code Files

    • cpu_bound.c: Implements a CPU-intensive computation to simulate CPU-bound tasks.
    • io_bound.c: Simulates an I/O-heavy process by performing repeated I/O operations.
    • main.c:
      • Manages process creation and execution.
      • Sets scheduling policies (SCHED_OTHER, SCHED_RR, SCHED_FIFO).
      • Assigns tasks based on the specified I/O ratio.
      • Measures turnaround time for analysis.

    Supporting Files

    • Makefile:
      • Defines rules for compiling the project.
      • Provides make build and make clean targets.
    • simulation.sh:
      • Automates compilation and execution.
      • Runs simulations with different scheduling policies and I/O ratios.
      • Stores results in simulation_results.txt.
    • simulation_results.txt: Stores the output logs of the simulations for later analysis.

    Prerequisites

    Ensure that gcc is installed on your system. This project is intended for Linux-based operating systems. To install the necessary dependencies, run:

    sudo apt update && sudo apt install gcc make

    Compilation & Execution

    1. Build the Project

    Compile the source code using:

    make build

    This will generate the following binaries:

    • cpu_bound
    • io_bound
    • main

    2. Run the Simulation

    Since modifying scheduling policies requires root privileges, execute the script with:

    sudo ./simulation.sh

    3. View Results

    After execution, check the simulation results using:

    cat simulation_results.txt

    How It Works

    1. The script first ensures it is executed with root privileges.
    2. It compiles the project using make build.
    3. The script iterates through different I/O ratios and scheduling policies:
      • I/O Ratios Tested: 0.1, 0.6
      • Scheduling Policies: SCHED_OTHER, SCHED_RR, SCHED_FIFO
    4. For each combination, the main program is executed, simulating 50 processes.
    5. The turnaround time for each process is recorded and stored in simulation_results.txt.
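    The bookkeeping described above can be sketched in Python (a simplified, illustrative model; the project itself implements this in C across main.c and simulation.sh, and the function names here are hypothetical):

```python
import random

def assign_types(n_procs, io_ratio, seed=42):
    """Label each of n_procs processes IO- or CPU-bound so that,
    on average, io_ratio of them are IO-bound."""
    rng = random.Random(seed)
    return ["IO" if rng.random() < io_ratio else "CPU" for _ in range(n_procs)]

def turnaround_stats(records):
    """records: list of (pid, start_time, end_time) tuples for finished
    processes. Returns per-process turnaround times and their average."""
    turnarounds = {pid: end - start for pid, start, end in records}
    average = sum(turnarounds.values()) / len(turnarounds)
    return turnarounds, average

# Example bookkeeping for three finished processes
records = [(101, 0.0, 0.4), (102, 0.1, 0.6), (103, 0.2, 1.0)]
per_proc, avg = turnaround_stats(records)
```

    The real simulation records wall-clock start/end times around fork/waitpid and writes the table shown under "Example Output Format".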

    Customization

    You can modify the simulation parameters in simulation.sh:

    • Adjust the io_ratios array to test different I/O ratios.
    • Change the scheduling_policies array to include or exclude policies.

    Example Output Format

    The results file (simulation_results.txt) will contain entries like:

    IO_RATIO=0.1, POLICY=SCHED_RR
    PID       Type       Start Time      End Time        Turnaround Time
    12345     IO         0.123456        0.567890        0.444434
    ...
    Average Turnaround Time: 0.345678
    -----------------------------------------
    

    Notes

    • The main.c file creates 50 child processes, assigning them as CPU-bound or I/O-bound based on the specified ratio.
    • cpu_bound.c executes a loop-intensive task, while io_bound.c performs periodic file writes and sleeps.
    • The scheduling policy affects process execution order and turnaround time.

    Cleaning Up

    To remove compiled binaries and reset the project, run:

    make clean

    Conclusion

    This simulation provides insights into how different scheduling strategies impact CPU-bound and I/O-bound processes. The recorded results help analyze the efficiency of scheduling algorithms under varying workloads.

    Visit original content creator repository
    https://github.com/faezehghiasi/Process-Scheduling-Simulation

  • laravel-google-cloud-storage

    Google Cloud Storage filesystem driver for Laravel



    Google Cloud Storage filesystem driver for Laravel 9 and above (using Flysystem v3 and its own GCS adapter).

    Looking for Laravel 8 support? Use the v1 branch!

    Support us

    We invest a lot of resources into creating best in class open source packages. You can support us by buying one of our paid products.

    We highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using. You’ll find our address on our contact page. We publish all received postcards on our virtual postcard wall.

    Installation

    You can install the package via composer:

    composer require spatie/laravel-google-cloud-storage

    Next, add a new disk to your filesystems.php config:

    'gcs' => [
        'driver' => 'gcs',
        'key_file_path' => env('GOOGLE_CLOUD_KEY_FILE', null), // optional: /path/to/service-account.json
        'key_file' => [], // optional: Array of data that substitutes the .json file (see below)
        'project_id' => env('GOOGLE_CLOUD_PROJECT_ID', 'your-project-id'), // optional: is included in key file
        'bucket' => env('GOOGLE_CLOUD_STORAGE_BUCKET', 'your-bucket'),
        'path_prefix' => env('GOOGLE_CLOUD_STORAGE_PATH_PREFIX', ''), // optional: /default/path/to/apply/in/bucket
        'storage_api_uri' => env('GOOGLE_CLOUD_STORAGE_API_URI', null), // see: Public URLs below
        'api_endpoint' => env('GOOGLE_CLOUD_STORAGE_API_ENDPOINT', null), // set storageClient apiEndpoint
        'visibility' => 'public', // optional: public|private
        'visibility_handler' => null, // optional: set to \League\Flysystem\GoogleCloudStorage\UniformBucketLevelAccessVisibility::class to enable uniform bucket level access
        'metadata' => ['cacheControl' => 'public,max-age=86400'], // optional: default metadata
    ],

    Usage

    $disk = Storage::disk('gcs');
    
    $disk->put('avatars/1', $fileContents);
    $exists = $disk->exists('file.jpg');
    $time = $disk->lastModified('file1.jpg');
    $disk->copy('old/file1.jpg', 'new/file1.jpg');
    $disk->move('old/file1.jpg', 'new/file1.jpg');
    $url = $disk->url('folder/my_file.txt');
    $url = $disk->temporaryUrl('folder/my_file.txt', now()->addMinutes(30));
    $disk->setVisibility('folder/my_file.txt', 'public');

    See https://laravel.com/docs/master/filesystem for full list of available functionality.

    Authentication

    The Google Client uses a few methods to determine how it should authenticate with the Google API.

    1. If you specify a path in the key_file_path key of the disk config, that JSON credentials file will be used.

    2. If the GOOGLE_APPLICATION_CREDENTIALS env var is set, it will use that.

      putenv('GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json');
    3. It will then try to load the key file from a 'well known path':

      • windows: %APPDATA%/gcloud/application_default_credentials.json
      • others: $HOME/.config/gcloud/application_default_credentials.json
    4. If running in Google App Engine, the built-in service account associated with the application will be used.

    5. If running in Google Compute Engine, the built-in service account associated with the virtual machine instance will be used.

    6. If you want to authenticate directly without using a json file, you can specify an array for key_file in disk config with this data:

      'key_file' => [
          'type' => env('GOOGLE_CLOUD_ACCOUNT_TYPE'),
          'private_key_id' => env('GOOGLE_CLOUD_PRIVATE_KEY_ID'),
          'private_key' => env('GOOGLE_CLOUD_PRIVATE_KEY'),
          'client_email' => env('GOOGLE_CLOUD_CLIENT_EMAIL'),
          'client_id' => env('GOOGLE_CLOUD_CLIENT_ID'),
          'auth_uri' => env('GOOGLE_CLOUD_AUTH_URI'),
          'token_uri' => env('GOOGLE_CLOUD_TOKEN_URI'),
          'auth_provider_x509_cert_url' => env('GOOGLE_CLOUD_AUTH_PROVIDER_CERT_URL'),
          'client_x509_cert_url' => env('GOOGLE_CLOUD_CLIENT_CERT_URL'),
      ],

    Public URLs

    The adapter implements a getUrl($path) method which returns a public url to a file.

    Note: Method available for Laravel 5.2 and higher. If used on 5.1, it will throw an exception.

    $disk = Storage::disk('gcs');
    $url = $disk->url('folder/my_file.txt');
    // http://storage.googleapis.com/bucket-name/folder/my_file.txt

    If you configure a path_prefix in your config:

    $disk = Storage::disk('gcs');
    $url = $disk->url('folder/my_file.txt');
    // http://storage.googleapis.com/bucket-name/path-prefix/folder/my_file.txt

    If you configure a custom storage_api_uri in your config:

    $disk = Storage::disk('gcs');
    $url = $disk->url('folder/my_file.txt');
    // http://your-custom-domain.com/bucket-name/path-prefix/folder/my_file.txt

    For a custom domain (storage api uri), you will need to configure a CNAME DNS entry pointing to storage.googleapis.com.

    Please see https://cloud.google.com/storage/docs/xml-api/reference-uris#cname for further instructions.
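    The three URL shapes above differ only in how the configured pieces are joined. A Python sketch of the composition (illustrative only, not the adapter's actual PHP code; the helper name is made up):

```python
def public_url(path, bucket,
               storage_api_uri="https://storage.googleapis.com",
               path_prefix=""):
    """Compose a public object URL the way the examples above show:
    <storage api uri>/<bucket>/<optional path prefix>/<path>."""
    parts = [storage_api_uri.rstrip("/"), bucket]
    if path_prefix:
        parts.append(path_prefix.strip("/"))
    parts.append(path.lstrip("/"))
    return "/".join(parts)

url = public_url("folder/my_file.txt", "bucket-name", path_prefix="path-prefix")
```

    Setting storage_api_uri to your custom domain reproduces the third example.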

    Temporary / Signed URLs

    With the latest adapter versions, you can easily generate signed URLs for files that are not publicly visible by default.

    $disk = Storage::disk('gcs');
    $url = $disk->temporaryUrl('folder/my_file.txt', now()->addMinutes(30));
    // https://storage.googleapis.com/test-bucket/folder/my_file.txt?GoogleAccessId=test-bucket%40test-gcp.iam.gserviceaccount.com&Expires=1571151576&Signature=tvxN1OS1txkWAUF0cCR3FWK%seRZXtFu42%04%YZACYL2zFQxA%uwdGEmdO1KgsHR3vBF%I9KaEzPbl4b7ic2IWUuo8Jh3IoZFqdTQec3KypjDtt%02DGwm%OO6pWDVV421Yp4z520%o5oMqGBtV8B3XmjW2PH76P3uID2QY%AlFxn23oE9PBoM2wXr8pDXhMPwZNJ0FtckSc26O8PmfVsG7Jvln%CQTU57IFyB7JnNxz5tQpc2hPTHbCGrcxVPEISvdOamW3I%83OsXr5raaYYBPcuumDnAmrK%cyS9%Ky2fL2B2shFO2cz%KRu79DBPqtnP2Zf1mJWBTwxVUCK2jxEEYcXBXtdOszIvlI6%tp2XfVwYxLNFU

    Uniform bucket-level access

    Google Cloud Storage allows setting permissions at the bucket level i.e. “Uniform bucket-level access”.

    Without the proper configuration, uploads fail with the error "Cannot insert legacy ACL for an object when uniform bucket-level access is enabled".

    When uploading files to such buckets, ensure the visibility_handler in the configuration file is set as follows:

    'visibility_handler' => \League\Flysystem\GoogleCloudStorage\UniformBucketLevelAccessVisibility::class,

    Please see https://cloud.google.com/storage/docs/access-control/signed-urls and https://laravel.com/docs/6.x/filesystem for more info.

    Testing

    composer test

    Changelog

    Please see CHANGELOG for more information on what has changed recently.

    Contributing

    Please see CONTRIBUTING for details.

    Security Vulnerabilities

    Please review our security policy on how to report security vulnerabilities.

    Credits

    License

    The MIT License (MIT). Please see License File for more information.

    Visit original content creator repository https://github.com/spatie/laravel-google-cloud-storage
  • ChatGPT_DAN

    Visit original content creator repository
    https://github.com/suhailroushan13/ChatGPT_DAN

  • Computer-vision-tracking-system

    🔬 Research Contribution: Multi-Object Tracking for Precision Poultry Farming

    Role: Research Assistant
    Institution: University of Georgia
    Project Title: Enhancing Multi-Object Tracking of Broiler Chickens using Deep Learning, Machine Learning, and Computer Vision


    🧠 Overview

    Contributed to the development of a robust, real-time, identity-preserving AI tracking system for broiler chickens in commercial poultry farms. The goal was to improve behavior analysis, tracking reliability, and animal welfare using modern deep learning and ML pipelines.


    🚀 Technical Highlights

    1. Object Detection & Optimization

    • Trained and benchmarked 10 YOLO variants
    • Best model: YOLOv11x
      • Precision: 0.968
      • Recall: 0.960
      • mAP@50: 0.986
      • mAP@50–95: 0.805
    • Applied L1 unstructured pruning for latency reduction
      • Inference Speed: Improved from 46.5 FPS → 60 FPS
      • Pruning Ratio: 0.09

    2. Deep Feature Extraction & Re-Identification

    Designed a hybrid deep feature extractor using:

    • Vision Transformer (ViT)
    • ResNet152
    • DenseNet201

    Embedding Evaluation Metrics:

    • Cosine Similarity: 0.956 ± 0.032
    • Euclidean Distance: 0.020 ± 0.007
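    For reference, the two embedding metrics can be computed from a pair of feature vectors as follows (a Python sketch with toy 3-d vectors, not the project's actual ViT/ResNet/DenseNet embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def euclidean_distance(a, b):
    """Straight-line distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy embeddings standing in for the deep features
emb_1 = [0.2, 0.4, 0.4]
emb_2 = [0.2, 0.5, 0.3]
sim = cosine_similarity(emb_1, emb_2)
dist = euclidean_distance(emb_1, emb_2)
```

    High cosine similarity and low Euclidean distance between detections of the same bird are what make re-identification reliable across frames.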

    3. Kinematics-Aware Identity Classification

    Developed classifiers using features like velocity, acceleration, and displacement. Benchmarked 15 ML models, including:

    • Logistic Regression, Random Forest, Extra Trees Classifier (Best)
    • Gradient Boosting, XGBoost, LightGBM, CatBoost, AdaBoost
    • K-Nearest Neighbors (KNN), Support Vector Machine (SVM)
    • Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA)
    • Decision Tree, Naive Bayes, Multilayer Perceptron (MLP)

    Top Performer: Extra Trees Classifier

    • Accuracy: 0.917
    • Precision: 0.958
    • Recall: 0.920
    • F1 Score: 0.939
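The kinematic features feeding these classifiers can be obtained from tracked centroid positions by finite differences. A minimal sketch, assuming 30 FPS video and an (N, 2) array of pixel coordinates (both the sampling rate and the exact feature set are illustrative assumptions, not the project's precise setup):

```python
import numpy as np

def kinematic_features(track, dt=1 / 30):
    """Finite-difference kinematics from an (N, 2) array of centroid positions.

    `dt` assumes 30 FPS video; the feature set is illustrative, not the
    project's exact one.
    """
    velocity = np.diff(track, axis=0) / dt          # (N-1, 2) px/s
    speed = np.linalg.norm(velocity, axis=1)
    acceleration = np.diff(velocity, axis=0) / dt   # (N-2, 2) px/s^2
    return {
        "mean_speed": float(speed.mean()),
        "mean_accel": float(np.linalg.norm(acceleration, axis=1).mean()),
        "net_displacement": float(np.linalg.norm(track[-1] - track[0])),
    }

# Made-up track: a bird walking 3 px per frame in a straight line.
track = np.array([[0.0, 0.0], [3.0, 0.0], [6.0, 0.0], [9.0, 0.0]])
feats = kinematic_features(track)  # constant speed, zero acceleration
```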

    4. Multi-Object Tracking System

    Evaluated and optimized 6 tracking algorithms:

    • DeepSORT, StrongSORT, SMILEtrack, OC-SORT, ByteTrack, Modified ByteTrack

    Final Pipeline Metrics:

    • MOTA: 0.904 ± 0.073
    • MOTP: 0.953 ± 0.057
    • Tracking Speed: 30.1 ± 3.3 FPS
    • Continuous Duration: Up to 17.3 minutes
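For reference, once detections are matched to ground truth, MOTA and MOTP reduce to simple formulas: MOTA penalises misses, false positives, and identity switches relative to the total number of ground-truth objects, while MOTP averages the localisation quality of the matches (an overlap-style score here, consistent with the "higher is better" values reported above). A sketch with hypothetical counts:

```python
def mota(misses, false_positives, id_switches, num_gt):
    """MOTA = 1 - (FN + FP + IDSW) / total ground-truth objects."""
    return 1.0 - (misses + false_positives + id_switches) / num_gt

def motp(match_overlaps):
    """MOTP as the mean localisation overlap over all matched pairs
    (overlap-style, so higher is better)."""
    return sum(match_overlaps) / len(match_overlaps)

# Hypothetical counts over a short clip.
acc = mota(misses=12, false_positives=8, id_switches=2, num_gt=400)  # ~ 0.945
prec = motp([0.95, 0.97, 0.93])                                      # ~ 0.95
```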

    📈 Impact & Deployment

    Tracked over 5,700 broiler chickens under diverse real-world conditions including:

    • Lighting variability
    • Occlusions
    • Region-specific zones (feeder, drinker, open floor)

    Enabled:

    • Long-term identity preservation
    • Automated behavior monitoring
    • Precision livestock farming integrations

    This project bridged Computer Vision, ML, and Precision Agriculture, delivering a high-accuracy, scalable pipeline to advance smart farming and animal welfare monitoring systems.

    Visit original content creator repository
    https://github.com/saiakshitha33/Computer-vision-tracking-system

  • wheelbot


    Wheelbot

    This repository contains all files required to build a Wheelbot.

    Project page: https://sites.google.com/view/wheelbot/start

    Paper link: https://arxiv.org/abs/2207.06988

    🎉 The Mini Wheelbot – an improved version of the Wheelbot is now available! 🎉
    github.com/wheelbot/mini-wheelbot

Contains the CAD files of the Wheelbot v2.5 as STL files.

    3D printer used: Markforged Onyx One
    Material: Onyx
    Layer height: 0.2 mm
    Use default settings for all else.
    See the following links for tips on printing:
    https://markforged.com/resources/learn/design-for-additive-manufacturing-plastics-composites/3d-printing-strategies-for-composites/composites-3d-printing-design-tips
    https://static.markforged.com/downloads/CompositesDesignGuide.pdf

The folder also contains the technical drawing for the cut copper rings forming the reaction wheel, as well as the PDF “wheelbot v2.5 assembly view.pdf”, which lets you interact with the complete Wheelbot assembly (requires Adobe Acrobat Reader).

Contains the circuit layouts of the motherboard, which connects to the Maevarm M2 and supplies the uDriver-v2 with power.

Contains the MATLAB files used to symbolically derive the equations of motion (EOM) of the Wheelbot.

FlywUni_symbolic_derive.m: Derives the Euler-Lagrange equations of a reaction-wheel unicycle robot using the parameters set in “config_FlyUni” and the “Langrange.m” script defined in the external libraries.

FlywUni_symbolic_linearize.m: Reshapes the symbolic ODE obtained from “FlywUni_symbolic_derive”, brings it into the form expected by the standard MATLAB ODE solvers, and saves it as

    save('EQS_matrices_nonlin.mat', 'M_nonlin', 'RHS_nonlin')

    Also linearizes the set of equations and saves the linearization as

    save('EQS_matrices.mat', 'M', 'RHS')

FlywUni_symbolic_analyze.m: Loads the pre-computed dynamic equations, computes a trajectory using a standard ODE solver (RK45), and plots the trajectory and the total energy of the system.
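The core of such an analysis script — integrate the nonlinear ODE and monitor total energy as a sanity check — can be sketched in a few lines. The sketch below uses a fixed-step RK4 integrator (a stand-in for MATLAB's adaptive RK45/ode45) on a simple pendulum, NOT the Wheelbot's actual EOM; the dynamics, parameters, and initial tilt are all illustrative:

```python
import math

def rk4_step(f, t, y, h):
    """One fixed-step RK4 update (stand-in for MATLAB's adaptive RK45/ode45)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Illustrative stand-in dynamics: a simple pendulum (NOT the Wheelbot's EOM);
# g, l, and the initial tilt are made-up parameters.
g, l = 9.81, 0.1

def pendulum(t, y):
    theta, omega = y
    return [omega, -(g / l) * math.sin(theta)]

def total_energy(y):
    """Total mechanical energy per unit mass: kinetic + potential."""
    theta, omega = y
    return 0.5 * (l * omega) ** 2 + g * l * (1 - math.cos(theta))

y, h = [0.5, 0.0], 1e-3
e0 = total_energy(y)
for step in range(1000):           # integrate for 1 s
    y = rk4_step(pendulum, step * h, y, h)
drift = abs(total_energy(y) - e0)  # stays tiny if the integrator is sane
```

A conserved total energy over the trajectory is a cheap consistency check on both the derived equations and the integrator, which is exactly what the plot in the analysis script is for.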

Contains the Simulink model used to tune the estimator and the LQR controller. The recommended MATLAB version is R2020a.

The file “s00_config” contains the simulation settings, including the exact mass and inertia estimates for the Wheelbot v2.5 that we obtained from CAD.

    Before running “s01_unicycle.slx”, you need to run “s00_designLQR”.

Contains the firmware required to run the Wheelbot v2.5.

    /firmware/M2-on-wheelbot

    Contains the firmware that runs on a Maevarm M2 that is attached to the motherboard of the Wheelbot.

    /firmware/M2-wifi-dongle

Contains the firmware that runs on a Maevarm M2 connected to a PC; it handles the wifi communication with the Wheelbot.

    /firmware/computer-python-interface

A Python program, running on Ubuntu 18.04 LTS, that collects incoming data from and sends outgoing commands to the wifi dongle.

    BibTex

    @ARTICLE{geist2022wheelbot,
      author={Geist, A. Ren\'{e} and Fiene, Jonathan and Tashiro, Naomi and Jia, Zheng and Trimpe, Sebastian},
      journal={IEEE Robotics and Automation Letters}, 
      title={The Wheelbot: A Jumping Reaction Wheel Unicycle}, 
      year={2022},
      volume={7},
      number={4},
      pages={9683-9690}
    }
    

    Erratum

    In the initial publication, in Equation (3), the transform from averaged body rates ${}^{\text{B}}\omega_i$ to Euler rates was incorrectly denoted as

    $$\begin{bmatrix}
    \dot{q}_{1, \mathrm{G}} \\\
    \dot{q}_{2, \mathrm{G}} \\\
    \dot{q}_{3, \mathrm{G}}
    \end{bmatrix} = \begin{bmatrix}
    e_1^{\mathrm{T}} R_2 \\\
    e_2^{\mathrm{T}} \\\
    e_3^{\mathrm{T}} R_1 R_2
\end{bmatrix} \sum_{i=1}^4 \frac{{ }^B \omega_i(k)}{4},$$

    corresponding to

    $$\left[\begin{array}{c}
    \dot{q}_{1, \mathrm{G}} \\\
    \dot{q}_{2, \mathrm{G}} \\\
    \dot{q}_{3, \mathrm{G}}
\end{array}\right]=\left[\begin{array}{ccc}
    R_2^T e_1 &
    e_2 &
    R_2^T R_1^T e_3
    \end{array}\right]^{\top} \sum_{i=1}^4 \frac{{ }^B \omega_i(k)}{4}.$$

    However, the correct transform from averaged body rates ${}^{\text{B}}\omega_i$ to Euler rates is

    $$\left[\begin{array}{c}
    \dot{q}_{1, \mathrm{G}} \\\
    \dot{q}_{2, \mathrm{G}} \\\
    \dot{q}_{3, \mathrm{G}}
\end{array}\right]=\left[\begin{array}{ccc}
    R_2^T e_1 &
    e_2 &
    R_2^T R_1^T e_3
    \end{array}\right]^{-1} \sum_{i=1}^4 \frac{{ }^B \omega_i(k)}{4}.$$

While we implemented the faulty transform in the Wheelbot’s state estimation routine (see main.c, line 340), the error went unnoticed during experimentation because the robot remained close to its upright equilibrium. There, $q_1$ and $q_2$ stay near zero, so the two transforms become almost identical, as

$$\left[\begin{array}{ccc}
    R_2^T e_1 &
    e_2 &
    R_2^T R_1^T e_3
\end{array}\right]^{\top} = \left[\begin{array}{ccc}
    \cos(q_2) & 0 & \sin(q_2) \\\
    0 & 1 & 0 \\\
    -\cos(q_1) \sin(q_2) & \sin(q_1) & \cos(q_1) \cos(q_2)
    \end{array}\right],$$

$$\left[\begin{array}{ccc}
    R_2^T e_1 &
    e_2 &
    R_2^T R_1^T e_3
\end{array}\right]^{-1} = \left[\begin{array}{ccc}
    \cos(q_2) & 0 & \sin(q_2) \\\
\tan(q_1) \sin(q_2) & 1 & -\tan(q_1) \cos(q_2) \\\
    -\sin(q_2) / \cos(q_1) & 0 & \cos(q_2) / \cos(q_1)
    \end{array}\right].$$

Importantly, our results on tilt estimation using accelerometers, as depicted in Figure 10, are not affected by this error. In the first experiment (Figure 10, left), the robot’s tilt angles were kept at zero. In the second experiment (Figure 10, right), $q_1 \approx 0$, such that $\tan(q_1) \approx q_1$. Consequently, the two transforms deviated only marginally from each other in both experiments.
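This correction is easy to verify numerically. The NumPy sketch below builds $M = [R_2^T e_1,\, e_2,\, R_2^T R_1^T e_3]$ from the closed-form transpose given above, checks the erratum's closed-form inverse against `numpy.linalg.inv`, and compares transpose and inverse both away from and near the upright equilibrium (the test angles are made up):

```python
import numpy as np

def body_to_euler_matrix(q1, q2):
    """M with columns R2^T e1, e2, R2^T R1^T e3, built from its closed-form transpose."""
    c1, s1 = np.cos(q1), np.sin(q1)
    c2, s2 = np.cos(q2), np.sin(q2)
    m_t = np.array([[c2, 0.0, s2],
                    [0.0, 1.0, 0.0],
                    [-c1 * s2, s1, c1 * c2]])
    return m_t.T

def euler_rate_transform(q1, q2):
    """Closed-form M^{-1} from the erratum (the correct body-rate-to-Euler-rate map)."""
    c1, s1 = np.cos(q1), np.sin(q1)
    c2, s2 = np.cos(q2), np.sin(q2)
    t1 = s1 / c1
    return np.array([[c2, 0.0, s2],
                     [t1 * s2, 1.0, -t1 * c2],
                     [-s2 / c1, 0.0, c2 / c1]])

# Away from upright the faulty transpose and the correct inverse clearly differ;
# near upright (q1, q2 ~ 0) they almost coincide, which is why the bug was benign.
M = body_to_euler_matrix(0.4, 0.3)
gap_tilted = np.abs(M.T - np.linalg.inv(M)).max()
M0 = body_to_euler_matrix(0.01, 0.01)
gap_upright = np.abs(M0.T - np.linalg.inv(M0)).max()
```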

We added a Jupyter notebook to the project’s GitHub repository detailing the calculation of the transform from body rates to Euler rates.

    Visit original content creator repository
    https://github.com/AndReGeist/wheelbot