Mallikarjun501

Implementation of PSSA: Homomorphic Encryption + Differential Privacy + Byzantine-Resilient Aggregation for Federated Learning on NSL-KDD

79 stars · 100% credibility · Found Apr 22, 2026
AI Summary (Python)

This project implements a privacy-preserving federated learning system that lets multiple clients collaboratively train a cybersecurity model on network intrusion data without sharing their raw data.

How It Works

1. 🔍 Discover Secure Team Training

You find this project that lets multiple devices train an AI model together for spotting network threats, without anyone sharing their private data.

2. 💻 Prepare Your Computer

You create a simple workspace on your machine by installing the needed free tools, just like setting up a new app.

3. 📥 Download Practice Data

You grab the sample network data files, which split automatically into pieces for each team member.
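The automatic split in step 3 can be pictured as plain, non-overlapping sharding of one dataset across the five clients. A minimal sketch, where the function name, client labels, and the random arrays standing in for NSL-KDD are all illustrative assumptions rather than the repo's actual code:

```python
import numpy as np

def shard_dataset(features, labels, client_ids=("A", "B", "C", "D", "E")):
    """Split one dataset into equal, non-overlapping shards, one per client."""
    order = np.random.default_rng(seed=0).permutation(len(features))  # shuffle first
    shards = {}
    for cid, idx in zip(client_ids, np.array_split(order, len(client_ids))):
        shards[cid] = (features[idx], labels[idx])
    return shards

# Toy stand-in for NSL-KDD: 1000 samples, 41 features, binary labels.
X = np.random.rand(1000, 41)
y = np.random.randint(0, 2, size=1000)
shards = shard_dataset(X, y)
print({cid: xs.shape for cid, (xs, _) in shards.items()})  # every shard is (200, 41)
```

Each client then trains only on its own shard, which is what keeps raw records local.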

4. 🚀 Start the Team Leader

In one window, you launch the central coordinator that guides the whole training process.

5. 🔗 Connect the Team Members

In five more windows, you start each edge device, and they all link up securely to the leader.

6. ⚙️ Watch Collaborative Learning

Everyone trains locally with privacy protections, shares safe updates, and the leader combines them round by round, adapting as needed.
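The "privacy protections" in step 6 typically mean each client clips its model update and adds calibrated Gaussian noise (the Gaussian mechanism of differential privacy) before the leader averages the results. A minimal one-round sketch, with the clip norm and noise scale chosen purely for illustration rather than taken from the repo's tuned values:

```python
import numpy as np

def sanitize_update(update, clip_norm=1.0, noise_std=0.1, rng=np.random.default_rng()):
    """Clip the update's L2 norm, then add Gaussian noise (Gaussian mechanism)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def federated_average(updates):
    """Server-side FedAvg: element-wise mean of the sanitized client updates."""
    return np.mean(np.stack(updates), axis=0)

# One round with five clients holding random local updates.
client_updates = [np.random.randn(10) for _ in range(5)]
global_update = federated_average([sanitize_update(u) for u in client_updates])
print(global_update.shape)  # (10,)
```

Clipping bounds any one client's influence, and the noise ensures the shared update does not reveal individual records; the privacy budget the repo plots tracks how much noise has been spent across rounds.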

7. 📈 Review Impressive Results

You get graphs and reports showing how well the shared model detects threats, with minimal data exchanged and strong privacy guarantees.

AI-Generated Review

What is privacy-preserving-secure-aggregation-fl?

This Python repo runs a full federated learning pipeline on the NSL-KDD dataset for intrusion detection, using homomorphic encryption, differential privacy, and Byzantine-resilient aggregation to keep client data private during training. You spin up one server process and five clients (via `python client.py A` through `E`) over TCP sockets; the system shards the data automatically, trains PyTorch models for 20 rounds, and outputs metrics.csv plus plots of accuracy, communication cost (0.05 MB sparse average), and privacy budgets. No raw data leaves the clients; only encrypted sparse updates are aggregated securely.
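The homomorphic encryption is what lets the server aggregate updates it cannot read. As a lightweight stand-in that demonstrates the same property — the server learns only the sum, never an individual update — here is a SecAgg-style pairwise additive-masking sketch; it illustrates the idea and is not the repo's encryption code:

```python
import numpy as np

def masked_uploads(true_updates, rng=np.random.default_rng(seed=1)):
    """Each pair of clients (i, j), i < j, shares a random mask r_ij; client i
    adds it and client j subtracts it. Individual uploads look random, but
    every mask cancels in the sum, so the server recovers the exact total."""
    uploads = [u.astype(float).copy() for u in true_updates]
    n = len(uploads)
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.normal(size=uploads[0].shape)
            uploads[i] += r
            uploads[j] -= r
    return uploads

true = [np.full(4, float(k)) for k in range(5)]       # clients hold 0,1,2,3,4 vectors
uploads = masked_uploads(true)
server_sum = np.sum(uploads, axis=0)                  # server only ever sums uploads
print(np.allclose(server_sum, np.sum(true, axis=0)))  # True: the masks cancel
```

Real homomorphic schemes (e.g. Paillier-style additive encryption) achieve the same sum-only visibility cryptographically rather than through shared masks.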

Why is it gaining traction?

It bundles privacy-preserving aggregation with adaptive compression and GPU-accelerated training in one workflow, outperforming FedAvg/SecAgg/DP-FL baselines on NSL-KDD privacy attacks (GLA drops to 12.5%) at low bandwidth. The `comparison.py` script reruns baselines for instant side-by-side plots, making it dead simple to validate tweaks. 79 stars reflect demand for this edge-ready, IEEE paper-aligned federated learning implementation.
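Byzantine-resilient aggregation generally swaps the plain mean for a robust statistic. One common choice, shown here as an illustrative sketch rather than the repo's exact rule, is the coordinate-wise median, which a minority of malicious clients cannot pull arbitrarily far:

```python
import numpy as np

def robust_aggregate(updates):
    """Coordinate-wise median: tolerant of a minority of corrupted updates."""
    return np.median(np.stack(updates), axis=0)

honest = [np.ones(3) for _ in range(4)]      # four honest clients send all-ones
byzantine = [np.full(3, 1e6)]                # one attacker sends a huge update
print(robust_aggregate(honest + byzantine))  # [1. 1. 1.] -- the attacker is outvoted
print(np.mean(np.stack(honest + byzantine), axis=0))  # plain mean is dragged to ~200000
```

This is the failure mode the `comparison.py` baselines make visible: FedAvg's plain mean collapses under a single poisoned client, while a robust rule holds.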

Who should use this?

Cybersecurity devs building distributed intrusion detectors on NSL-KDD. FL researchers testing homomorphic encryption + differential privacy combos locally. Edge teams simulating 5-client setups with byzantine faults and noisy networks.

Verdict

Worth forking for NSL-KDD experiments: the detailed README and auto-generated plots make it runnable in about 30 minutes, despite the 1.0% credibility score, modest 79 stars, and student-led maturity. Slow encryption (~60 s/round) limits scale, but the included baselines help spot quick wins.
