PowerGrid AI
PowerGrid Solution S.R.L.

Energy sector data intelligence platform

Four sources,
zero agreement
on anything.
We fixed that.

Tags: AI ENGINEERING · ENERGY · PROCESS AUTOMATION · ENTITY RESOLUTION · RAG

Romanian energy engineers were cross-referencing ATR permits, contracts, commissioning records, and substation data by hand, with each source using a different name for the same power plant. We built an AI platform that resolves the data, automates grid analysis, and generates technical reports. The engineers still work in Excel. The platform handles everything else.

44M+
Entity Comparisons
4
Data Sources Unified
Days→Min
Cross-Reference Time
0
Copy-Paste Reports
5
Automation Pipelines
EU
GDPR-Native Hosting
PowerGrid AI, data unification and automation pipeline

Energy runs on data nobody can trust

Romanian grid engineers deal with four separate data sources on every project: ATR permits, customer contracts, commissioning records, and substation data. Each one was built by a different team, at a different time, with no agreement on how to name things, so the same power plant ends up with a different identifier in every file.

On top of that, there's no common format to work with. Excel, CSV, PDF, and DXF all coexist in the same project folder, and since there's no shared ID linking them, someone has to sit down and manually figure out which records belong together. That someone is always the engineer, and it takes days.

Reports get assembled by copy-pasting between spreadsheets, and the mistakes only surface when the document is already in front of the client. In energy infrastructure, that's not just an embarrassment: fixing it after delivery costs real money and damages trust.

"The same substation appeared as ‘STS Centru’, ‘ST-Centru’, and ‘STC’, all in the same project."

The data exists. It just doesn't know it's the same.

Here's the thing: every piece of data in every source is actually correct. The ATR permit isn't wrong, the commissioning record isn't wrong, and neither is the substation data. Nothing is missing or broken in isolation.

The problem is that each system evolved on its own, without any shared reference point. No common ID, no enforced naming convention, no way for one system to recognize that it's talking about the same entity as another. So the engineers end up solving that identity problem by hand, on every single project, from scratch.

That's what we had to fix first. Before you can automate anything in this workflow, you need to unify the data, and entity resolution isn't just a feature, it's the thing that makes every other capability on the platform possible.

You can't automate what you can't trust, and you can't trust what you can't identify.

44 million comparisons. Human approval for every match.

We built a fuzzy matching engine that runs across all four data sources at the same time, using multiple hashing algorithms in parallel rather than a simple string comparison, with each algorithm tuned for a different type of naming variation you'd find in Romanian energy data.

The engine processes over 44 million pairwise comparisons with high accuracy, and the architecture scales to billions on commodity hardware. The full production load runs on a low-end VPS.

Every proposed match gets a confidence score, and a high score doesn't mean it merges automatically, it surfaces as a ranked suggestion for an engineer to review. That's an intentional design choice, not a limitation.

Human-in-the-loop isn't a compromise, it's the whole point.

Every merge requires explicit sign-off. Once an engineer confirms a match, it goes into the alias table permanently: “STS Centru” = “ST-Centru” = “STC”, resolved once and never touched again.
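As a minimal sketch (class and method names are illustrative, not the platform's actual schema), the alias table behaves like a permanent lookup from every engineer-confirmed variant to one canonical identifier:

```python
class AliasTable:
    """Toy alias table: once an engineer confirms a match,
    every known variant resolves to one canonical ID."""

    def __init__(self):
        self._canonical = {}  # variant name -> canonical ID

    def confirm(self, canonical, *variants):
        """Record an engineer-approved merge permanently."""
        for name in (canonical, *variants):
            self._canonical[name] = canonical

    def resolve(self, name):
        """Return the canonical ID, or the name itself if unknown."""
        return self._canonical.get(name, name)


aliases = AliasTable()
aliases.confirm("STS Centru", "ST-Centru", "STC")
```

After that single confirmation, `resolve("STC")` and `resolve("ST-Centru")` both return `"STS Centru"` on every future project, which is what "resolved once and never touched again" means in practice.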

1

Ingest

Import all four source formats (Excel, CSV, PDF, DXF) into a unified internal schema.

2

Compare

44M+ pairwise comparisons using multiple hashing algorithms optimized for energy-sector naming patterns.

3

Score

Every candidate match receives a confidence score. High-confidence clusters are surfaced first.

4

Review

Engineer approves or rejects each proposed match. No automatic merging without human sign-off.

5

Persist

Confirmed matches enter the alias table permanently. One authoritative record per entity across all sources.
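The comparison-and-scoring steps can be illustrated with a toy version that blends a few cheap string-similarity signals into one confidence score. The production engine uses multiple hashing algorithms in parallel rather than `difflib`, so treat this purely as a sketch of the idea:

```python
import difflib
import re


def normalized(name):
    """Collapse punctuation and spacing variants ('ST-Centru' -> 'stcentru')."""
    return re.sub(r"[^a-z0-9]", "", name.lower())


def token_overlap(a, b):
    """Jaccard overlap of name tokens, split on spaces, dashes, etc."""
    ta = set(re.split(r"[\s\-_/]+", a.lower()))
    tb = set(re.split(r"[\s\-_/]+", b.lower()))
    return len(ta & tb) / max(len(ta | tb), 1)


def confidence(a, b):
    """Blend several signals into one score in [0, 1]; each signal
    is sensitive to a different type of naming variation."""
    signals = [
        difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio(),
        difflib.SequenceMatcher(None, normalized(a), normalized(b)).ratio(),
        token_overlap(a, b),
    ]
    return sum(signals) / len(signals)


# Rank candidate pairs; high-confidence clusters surface first for review.
candidates = [("STS Centru", "ST-Centru"), ("STS Centru", "ST Vest")]
ranked = sorted(candidates, key=lambda p: confidence(*p), reverse=True)
```

Nothing in `ranked` merges on its own; it is exactly the ordered suggestion list that the review step puts in front of an engineer.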

Neplan in. Structured intelligence out.

Once the data foundation is unified and trusted, the automation can actually start. The first thing we tackled was grid analysis, which is usually the most technically demanding part of any project report and the one that takes the longest to produce manually.

The platform imports Neplan simulation exports directly. Neplan is the industry-standard grid simulation tool in Romania, so every project already produces this output without any workflow changes on the engineer's side.

From each export, the platform parses two scenarios: normal operation and N-1 fault conditions, which is the standard test of what happens when a single component fails. It then calculates the differences in load, voltage, and current between the two states, and automatically flags every threshold breach against configurable engineering limits.

What comes out the other side is a structured technical analysis, machine-generated and ready for an engineer to review, with no copy-pasting from raw simulation output, no manual table construction, no formatting work at all.

What used to be two days of work is now a button click

Import Neplan export, any project, any format version

Normal operation vs. N-1 fault analysis calculated automatically

Threshold breaches flagged with configurable engineering limits

Structured report generated, the engineer reviews output, not raw data
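Under illustrative column names (a real Neplan export differs), the normal-vs-N-1 comparison reduces to a join of the two scenarios plus a check against configurable limits:

```python
import pandas as pd

# Hypothetical per-metric limits; in the platform these are configurable.
LIMITS = {"voltage_pu": (0.9, 1.1), "loading_pct": (0.0, 100.0)}


def flag_breaches(normal: pd.DataFrame, n1: pd.DataFrame) -> pd.DataFrame:
    """Join the two scenarios per grid element and flag every N-1 value
    that falls outside its engineering limits."""
    merged = normal.merge(n1, on="element", suffixes=("_normal", "_n1"))
    rows = []
    for metric, (lo, hi) in LIMITS.items():
        col = f"{metric}_n1"
        for _, row in merged[(merged[col] < lo) | (merged[col] > hi)].iterrows():
            rows.append({
                "element": row["element"],
                "metric": metric,
                "normal": row[f"{metric}_normal"],
                "n1": row[col],
            })
    return pd.DataFrame(rows)


normal = pd.DataFrame({"element": ["L1", "L2"],
                       "voltage_pu": [1.00, 1.01],
                       "loading_pct": [60, 70]})
n1 = pd.DataFrame({"element": ["L1", "L2"],
                   "voltage_pu": [0.97, 0.88],
                   "loading_pct": [85, 112]})
breaches = flag_breaches(normal, n1)
```

Here element `L2` is flagged twice under the fault scenario, once for undervoltage and once for overload, while `L1` stays within limits; the flagged table is what feeds the structured analysis the engineer reviews.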

The report writes itself. The AI checks its own work.

Technical Memoranda are the formal documents that go with every energy project submission, and they used to be assembled mostly by hand. Now they auto-fill from the unified project data, with every derivable field populated automatically before the engineer ever opens the document.

But just generating the document isn't enough. After generation, the platform runs an AI verifier that cross-checks three things independently: the narrative text, the data outputs, and the economic section. If there's a conflict between any of them, the verifier flags it and proposes a correction. The engineer still makes the final call, the AI just makes sure nothing slips through.

"If the text says 95kV and the data says 110kV, the system flags it before it leaves the building."

Before this, those contradictions were caught by the client after delivery. The cost of that (revisions, delays, damage to credibility) was real. Now they get caught before anyone signs off.

Zero undetected contradictions between report text and underlying data.
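The narrative-vs-data check can be sketched as an extract-and-compare pass over the document text. The platform's verifier is AI-assisted and also covers the economic section, so this is only the skeleton of the idea, with hypothetical names:

```python
import re


def check_voltage_consistency(narrative: str, data_voltage_kv: float):
    """Extract every voltage stated in the narrative and return the ones
    that contradict the authoritative data value. Empty list = consistent."""
    stated = [float(v) for v in re.findall(r"(\d+(?:\.\d+)?)\s*kV", narrative)]
    return [v for v in stated if abs(v - data_voltage_kv) > 1e-6]


# The 95 kV / 110 kV example from the quote above:
conflicts = check_voltage_consistency("The line operates at 95 kV.", 110.0)
```

Any non-empty result is surfaced as a proposed correction; the engineer still decides whether the text or the data is wrong.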

Every answer grounded in your project data.

On top of the automation pipelines, the platform includes a RAG assistant that lets engineers query the entire project corpus in plain language. Every answer comes with citations pointing to the exact records it drew from, so there's no guessing and nothing is invented by a general model.

Because the entity resolution layer already unified all four data sources, the assistant doesn't see four separate datasets, it sees one coherent project record, and queries work across the full history at once.

“Which substations in this ATR permit are not yet in the commissioning records?”

Used to take hours of cross-referencing across four files. Now it's an instant, cited answer.

Every response links back to the source, so the engineer can verify any answer against the underlying record in a few seconds. The assistant is there to augment their judgment, not replace it.
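Because every record carries both a canonical entity ID (courtesy of the resolution layer) and a citation back to its source, the example query above reduces to set operations over the unified corpus. A toy sketch with hypothetical record fields:

```python
from dataclasses import dataclass


@dataclass
class Record:
    source: str   # e.g. "ATR permit", "commissioning records"
    entity: str   # canonical substation ID after entity resolution
    ref: str      # citation pointer back to the original row/page


def substations_missing_commissioning(records):
    """Which substations in the ATR permit are not yet in the
    commissioning records? Works on canonical IDs, so all sources line up."""
    in_permit = {r.entity for r in records if r.source == "ATR permit"}
    commissioned = {r.entity for r in records
                    if r.source == "commissioning records"}
    missing = in_permit - commissioned
    # Return each answer together with the citation that grounds it.
    return [(r.entity, r.ref) for r in records
            if r.source == "ATR permit" and r.entity in missing]


records = [
    Record("ATR permit", "STS Centru", "ATR-2024/annex 3, row 12"),
    Record("ATR permit", "ST Vest", "ATR-2024/annex 3, row 19"),
    Record("commissioning records", "STS Centru", "PIF-0071, p. 2"),
]
answer = substations_missing_commissioning(records)
```

The actual assistant answers in natural language via the RAG layer, but the grounding works the same way: every item in the answer carries a pointer the engineer can check in seconds.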

Tech stack

Built for correctness, auditability, and EU-native data residency.

PythonCore Backend
FastAPIAPI Layer
PostgreSQLDatabase
SupabaseDatabase Platform
LangChainRAG Framework
Pandas / NumPyData Processing

EU-native infrastructure

All data processed and stored within the EU. GDPR-compliant by architecture, not by policy. No data ever leaves European jurisdiction.

What changed

1

For the engineers

No more manual cross-referencing between files. The system already knows which records belong together, so the question of “what is this called in the other source?” has a permanent answer.

2

For project delivery

Reports are generated automatically and grid analysis runs on import. Engineers review the output instead of assembling it from scratch.

3

For the business

Errors are caught before delivery, not after. The AI verifier finds contradictions in every document before it ever reaches the client.

What this demonstrates

Data-first thinking

We solved the identity problem before touching any automation. Entity resolution was the foundation, and every other capability in the platform sits on top of it.

Process automation at scale

Over 44 million comparisons, five automation pipelines, and grid analysis that used to take two days now runs on demand, all on commodity infrastructure.

Human-in-the-loop by design

Every merge and every document requires explicit sign-off. The platform proposes, the engineer decides. Automation that doesn't strip away accountability.

RAG grounded in real project data

The AI assistant doesn't replace engineering knowledge, it makes the entire project corpus instantly queryable, with every answer tied to a specific source record.

EU-native infrastructure

Built for regulated industries from day one. Data stays within the EU by architecture, not just by policy.


Trusted by companies across Europe

Now let's talk about your project

Let's discuss your idea, your problems, and our solutions.