// FOUNDRY5

Data Security in Custom Software: A Complete Guide for UK Businesses


The Security Problem Nobody Warns You About at Handover

You signed off on the build. The platform launched on schedule. Users are logging in, data is flowing, the software works exactly as specified. And somewhere inside that codebase, there is an unencrypted database field, an exposed API endpoint, or a misconfigured access control sitting one breach away from a £17.5 million ICO fine and six months of reputation recovery.

 

Most UK businesses don’t discover their custom software’s security gaps through a proactive audit. They discover them through an incident. A contractor email flagging something that “looked unusual.” A customer calling to ask why their account was accessed from an unfamiliar location. A legal call on a Tuesday morning that nobody planned for.

 

According to IBM’s Cost of a Data Breach Report 2024, the average cost of a data breach in the UK has reached £3.58 million per incident. That figure does not include the pipeline damage, the client churn, or the 12 months of rebuilding trust that follows. What it also does not include is the specific reality that custom software carries your data, your architecture decisions, and the specific security assumptions your development team made under deadline pressure two years ago.

 

Off-the-shelf platforms carry their own risks. But they also carry a dedicated engineering team whose entire job is patching vulnerabilities before you ever encounter them. Custom software carries no such safety net. Security in a bespoke build is only as strong as the decisions made during architecture, development, and deployment. If those decisions weren’t made deliberately, you are exposed.

 

This guide is for UK business owners, CTOs, and operations directors who want to understand what genuine security inside custom software looks like: where the most common failures happen, what the UK regulatory environment actually demands, and how to evaluate whether your current platform is truly protected or merely compliant on paper.

 

Why Custom Software Creates Unique Security Risks for UK Businesses

Custom software security failures are rarely dramatic. They are not Hollywood hacks with blinking screens and countdowns. They are quiet: a misconfigured permission that allows one user class to read another’s records, an API endpoint that accepts unauthenticated requests because it was deprioritised during the final sprint, a third-party integration that stores tokens in plain text because the developer assumed it would be replaced “later.”

 

The best security engineers build protection into the architecture from day one rather than patching it onto a finished product. Most development teams do the opposite: they build the feature first and treat security as a phase two consideration that arrives after launch, after the budget is spent, and after the vulnerabilities are already embedded.

 

There is a pattern that repeats itself across mid-market UK software builds: a company commissions a £120,000 platform, receives a working product, and assumes the delivery signals quality across every dimension. Security is invisible when it works. It only becomes visible when it fails. And by the time it fails, the architecture decisions that created the exposure are months or years old.

 

Custom software also introduces a risk that off-the-shelf products do not carry: the assumption of uniqueness. Because the platform is bespoke, there is no public CVE database tracking its vulnerabilities, no security community analysing its codebase, and no vendor pushing patches. The security posture of a custom build is entirely dependent on the team that built it and the processes they followed.

 

That dependency is where most UK businesses are underexposed. Not because their developers were incompetent, but because security was never explicitly scoped, priced, or prioritised.

 

What UK Law Actually Requires: GDPR, ICO, and the Real Compliance Picture

UK GDPR and data security obligations for UK businesses handling personal data are not aspirational guidelines. They are legally enforceable requirements with fines structured to hurt at scale: up to £17.5 million or 4% of global annual turnover, whichever is higher.

 

The UK GDPR, as retained and amended post-Brexit under the Data Protection Act 2018, requires any system processing personal data to implement “appropriate technical and organisational measures” to protect that data. That phrase, “appropriate technical and organisational measures”, is deliberately broad, and the ICO interprets it through the lens of proportionality: the sensitivity of the data, the volume processed, and the likely impact of a breach.

 

What this means practically for custom software is the following. First, any platform that handles personal data must have documented security controls, not just functioning ones. The ICO expects evidence of intentional security decisions, not just the absence of incidents. Second, data must be protected both in transit and at rest: TLS encryption for data moving between systems, and encryption at the database level for data stored on your servers. Third, access controls must follow the principle of least privilege, meaning users and system components should only access the data they need for their specific function and nothing beyond it.

 

The honest picture: most custom software platforms built for UK SMEs between 2019 and 2023 were not built with documented security controls. They were built to specification, and security was assumed rather than architected. If your platform was built during that window and has never undergone an independent security review, there is a reasonable probability that your current compliance posture does not match what the ICO would expect to see following an incident.

 

This is not alarmism. It is pattern recognition drawn from repeated exposure to what UK businesses actually find when they commission their first security audit.

 

Already evaluating development partners for a new build or a security-aware rebuild? Understanding how to choose a software agency in London is the decision that determines everything downstream, including the security posture your platform launches with.

 

The Six Most Common Security Failures in Custom Software Builds

Understanding where breaches originate is more useful than understanding breach statistics in the abstract. These are the six failure modes that appear most consistently in custom software security reviews across UK mid-market businesses.

 

1. Authentication and Session Management Built as an Afterthought

Authentication is not a feature. It is the foundation of every security decision made after it. When authentication is built to minimum specification (users can log in, users can log out) rather than designed to resist real-world attack patterns, the entire platform inherits that weakness.

 

Consider what weak session management looks like in practice. A UK professional services firm built a client portal handling commercially sensitive documents. Session tokens were issued at login and stored in browser localStorage rather than secure HTTP-only cookies. Token expiry was set to 30 days with no idle timeout. A user left a shared computer logged in at a co-working space. The next person to sit down had full access to every document the firm had shared with that client over the preceding two years. The breach was not a hack. It was a design decision made during a Friday afternoon sprint.
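To make that concrete, here is a minimal sketch in Python (the language choice is arbitrary) of the two controls the firm was missing: session tokens delivered as secure HTTP-only cookies instead of localStorage, and an idle timeout enforced server-side. The names and the 15-minute timeout are illustrative choices, not a prescription.

```python
import time
from http import cookies

IDLE_TIMEOUT_SECONDS = 15 * 60  # illustrative 15-minute idle timeout

def build_session_cookie(token: str) -> str:
    """Build a Set-Cookie value for a session token.

    HttpOnly keeps the token out of reach of page JavaScript (unlike
    localStorage), Secure restricts it to HTTPS, and SameSite=Strict
    limits cross-site request exposure.
    """
    c = cookies.SimpleCookie()
    c["session"] = token
    c["session"]["httponly"] = True
    c["session"]["secure"] = True
    c["session"]["samesite"] = "Strict"
    return c["session"].OutputString()

def session_is_valid(last_activity: float, now: float) -> bool:
    """Reject any session idle longer than the timeout, regardless of
    how far away its absolute expiry is."""
    return (now - last_activity) <= IDLE_TIMEOUT_SECONDS

header = build_session_cookie("opaque-random-token")
```

Had the portal in the example enforced even the idle check above, the shared-computer session would have died within minutes of the first user walking away.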

 

The best development teams treat authentication architecture the way structural engineers treat load-bearing walls: they get it right before anything else is built on top of it.

 

2. Injection Vulnerabilities Left Open by Insufficient Input Validation

SQL injection has been the most documented attack vector in web application security for over two decades. It remains among the top three vulnerability categories in the current OWASP Top 10. The reason it persists is not ignorance; it is deadline pressure and the assumption that your specific application is an unlikely target.

 

Input validation is the process of treating every piece of data that enters your system as potentially hostile rather than assumed safe. When developers skip or abbreviate input validation to meet delivery timelines, they leave channels open through which an attacker can send crafted inputs that manipulate database queries, extract records, or escalate privileges. A UK e-commerce business processing 4,000 orders per month discovered this after a routine penetration test revealed that their custom checkout module was vulnerable to a basic SQL injection attack. Their entire customer database, including payment method references and delivery addresses, was extractable through a single malformed search query. The fix took two days. The audit that found it took six months to commission.
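The fix is not complicated. Here is an illustrative Python sketch, using an in-memory SQLite table purely for demonstration, contrasting a query built by string concatenation with a parameterised one. The table, data, and payload are hypothetical.

```python
import sqlite3

# In-memory table standing in for a customer database (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers (name) VALUES (?)",
                 [("Alice",), ("Bob",)])

def search_customers_unsafe(term):
    # VULNERABLE: the search term is spliced directly into the SQL string,
    # so crafted input can rewrite the query itself.
    return conn.execute(
        f"SELECT name FROM customers WHERE name = '{term}'"
    ).fetchall()

def search_customers_safe(term):
    # Parameterised: the driver passes the term as data, never as SQL,
    # so an injection payload matches nothing instead of executing.
    return conn.execute(
        "SELECT name FROM customers WHERE name = ?", (term,)
    ).fetchall()

payload = "' OR '1'='1"
# The unsafe version returns every customer; the safe version returns none.
```

The classic `' OR '1'='1` payload turns the unsafe query into one whose WHERE clause is always true, which is exactly the shape of leak the e-commerce example above suffered.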

 

Injection vulnerabilities are not exotic. They are entirely preventable. They persist because prevention requires discipline, not genius.

 

3. Broken Access Controls: The Most Exploited Vulnerability in 2023 and 2024

OWASP ranks broken access control as the number one web application security risk in its most recent Top 10. It is also the failure mode that most surprises non-technical business owners when they encounter it during an audit.

 

Broken access control means that a user can access data, functions, or administrative capabilities they are not supposed to. Not because they hacked anything. Because the system does not properly enforce the boundaries that were assumed to exist. A multi-tenant SaaS platform built for a UK logistics company contained a broken object-level authorisation flaw: by modifying a numeric ID in the URL, any authenticated user could view the shipment records of any other customer account. The business had been operating for 14 months before the vulnerability was discovered. During that time, any user with a valid login and basic technical curiosity could have read every shipment record on the platform.

 

Access control is not just an authentication problem. It is an architectural problem. The rules governing who can see what, who can do what, and what can call what need to be defined at the data layer, enforced at the application layer, and tested at the security layer. Most custom builds implement only the application layer.
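The logistics example above comes down to a single missing check. Here is a hedged sketch, in Python with an in-memory store standing in for the database, of what object-level authorisation looks like when it is enforced on every retrieval. All names here are hypothetical.

```python
# Hypothetical shipment store keyed by record ID; each record carries an
# owning account, mirroring a multi-tenant data model.
SHIPMENTS = {
    101: {"account_id": "acme", "ref": "SHP-101"},
    102: {"account_id": "globex", "ref": "SHP-102"},
}

class Forbidden(Exception):
    """Raised when a caller requests a record outside their tenancy."""

def get_shipment(record_id: int, requesting_account: str) -> dict:
    """Fetch a shipment, enforcing object-level authorisation.

    The ownership check runs on every retrieval, so changing the numeric
    ID in a URL cannot cross a tenant boundary.
    """
    record = SHIPMENTS[record_id]
    if record["account_id"] != requesting_account:
        raise Forbidden(
            f"account {requesting_account!r} cannot read record {record_id}"
        )
    return record
```

The vulnerable platform in the example did everything above except the `if` statement. That is the entire difference between 14 months of silent exposure and a system that enforces its boundaries.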

 

4. Insecure Third-Party Integrations and API Connections

Modern custom software rarely operates in isolation. It connects to payment processors, CRM systems, communication tools, analytics platforms, and data warehouses. Each integration introduces a new surface area where credentials can be exposed, data can leak in transit, or a compromised third party can be used as an entry point into your system.

 

The most common failure is credential management: API keys and service tokens stored in code repositories, environment configuration files committed to version control, or secrets passed as plain text in API requests. A development team builds a Stripe integration, hard-codes the API key during testing, and ships without rotating to environment variables. Three years later, the repository is briefly made public during a migration, and the key is harvested by an automated credential scanner within 47 minutes.
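The remedy is mechanical: read secrets from the runtime environment and fail loudly when they are absent. A minimal Python sketch, where `PAYMENT_API_KEY` is a hypothetical variable name rather than a reference to any specific provider:

```python
import os

def load_payment_api_key() -> str:
    """Read the payment provider key from the environment at runtime.

    Nothing secret lives in the repository: deployment tooling injects
    PAYMENT_API_KEY per environment, and a missing key raises at startup
    instead of silently falling back to a hard-coded test value.
    """
    key = os.environ.get("PAYMENT_API_KEY")
    if not key:
        raise RuntimeError(
            "PAYMENT_API_KEY is not set; refusing to start without a credential"
        )
    return key
```

A key managed this way cannot be harvested from a repository, because it was never in the repository to begin with.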

 

Ask your development partner specifically how third-party credentials are managed in your codebase. If the answer involves any mention of hard-coding, shared environment files, or keys stored in the database, you have a problem that needs addressing before the next sprint begins.

 

5. Missing Encryption for Data at Rest

Data in transit is almost universally encrypted across modern web applications: HTTPS is a baseline that most developers implement without instruction. Data at rest is different. Encrypting data stored in your database requires a deliberate architectural decision, adds complexity to queries, and introduces key management requirements that many teams defer indefinitely.

 

The result is that UK businesses often operate platforms where customer data, including names, contact details, identity documents, financial records, and behavioural data, sits in plaintext inside a database. If the database is accessed, there is no secondary protection. Everything is immediately readable. This is not a theoretical concern: the ICO’s enforcement actions include cases where fines were issued specifically because personal data was stored without encryption, even in the absence of an external breach.

 

Encryption at rest is not optional for UK businesses processing sensitive personal data. It is an appropriate technical measure under UK GDPR. If your platform does not implement it, your compliance posture is exposed regardless of whether a breach has occurred.

 

6. No Security Testing Before or After Launch

Security testing is the part of the development process that most SME projects cut first when budgets tighten or timelines compress. The reasoning is understandable: the software works, the client is happy, the delivery milestone is met. Security testing feels like an overhead rather than a deliverable.

 

This is the most expensive false economy in custom software development. Penetration testing conducted before launch typically costs between £3,500 and £9,000 for a mid-complexity web application. A breach that occurs because no penetration test was ever commissioned costs, on average, £3.58 million. The arithmetic is not complicated.

 

Security testing is not a one-time event. It is a continuous obligation. Every significant feature release, every new integration, and every change to the authentication or data access architecture should trigger a targeted security review. The best development teams build this into their sprint process rather than treating it as an optional add-on at the end of a project.

 

Building Security In Rather Than Bolting It On: The Architecture Approach

The difference between a platform that is genuinely secure and one that merely passes a basic compliance checklist comes down to when security enters the conversation. Security bolted on after delivery is remediation: expensive, disruptive, and structurally limited by the decisions already baked into the architecture. Security built in from the start is a different product entirely.

 

The best development teams treat security architecture the way civil engineers treat foundations: the most important decisions are invisible in the finished structure, but they determine everything that can be built on top. A platform with a well-designed security architecture can add features, scale throughput, and onboard new integrations without reopening fundamental questions about data exposure. A platform with a reactive security posture reopens those questions with every significant change.

 

What does security by design look like in practice?

 

Threat modelling at the architecture stage: Before a single line of code is written, a competent security-aware team maps the assets worth protecting, the likely attack vectors, and the controls required to address them. This takes time, and at many agencies it does not appear in a proposal as a billable line item. Ask for it explicitly.

 

Data classification at the schema stage: Not all data carries the same risk. A customer’s email address and a customer’s medical history are both personal data, but they require different controls, different retention policies, and different breach notification timelines. The best teams classify data before the database schema is designed, so the controls are built in rather than retrofitted.
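As a sketch of what classification at the schema stage can look like, here is an illustrative Python data model in which each field carries its classification as metadata, so downstream controls can be derived from the schema itself rather than decided field by field later. The tier names and fields are hypothetical.

```python
from dataclasses import dataclass, field, fields

# Hypothetical sensitivity tiers; real classifications would come from
# your data protection policy.
PUBLIC = "public"
PERSONAL = "personal"
SPECIAL_CATEGORY = "special_category"

@dataclass
class CustomerRecord:
    # The classification travels with the schema, so encryption, retention,
    # and breach-notification handling can all be derived from it.
    customer_id: int = field(metadata={"classification": PUBLIC})
    email: str = field(metadata={"classification": PERSONAL})
    medical_notes: str = field(metadata={"classification": SPECIAL_CATEGORY})

def fields_requiring_encryption(model) -> list:
    """List fields whose classification demands encryption at rest."""
    return [
        f.name for f in fields(model)
        if f.metadata.get("classification") in (PERSONAL, SPECIAL_CATEGORY)
    ]
```

With the classification embedded, a migration script or a compliance report can enumerate exactly which columns need which controls, instead of relying on a developer remembering.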

 

Security requirements in the acceptance criteria: Every user story that involves data access, authentication, or system integration should have explicit security requirements in its definition of done. “A user can log in” is a functional acceptance criterion. “A user can log in, sessions expire after 15 minutes of inactivity, and failed login attempts trigger a lockout after five consecutive failures” is a security-aware acceptance criterion. These are not the same thing.
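That second criterion is directly testable in code. Here is a minimal in-memory sketch of the lockout rule, assuming the five-failure threshold from the example criterion; a production version would persist counters and add a timed unlock.

```python
MAX_FAILURES = 5  # lockout threshold from the example acceptance criterion

class LoginGuard:
    """Track consecutive failed logins per user and lock out at the limit.

    An illustrative in-memory sketch only: state lives in a dict, so it
    resets on restart and does not span multiple application servers.
    """
    def __init__(self):
        self.failures = {}

    def is_locked(self, user: str) -> bool:
        return self.failures.get(user, 0) >= MAX_FAILURES

    def record_attempt(self, user: str, success: bool) -> None:
        if success:
            self.failures[user] = 0  # a successful login resets the counter
        else:
            self.failures[user] = self.failures.get(user, 0) + 1
```

Because the criterion is explicit, this behaviour can sit in the automated test suite and fail the build the day someone accidentally removes it.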

 

Regular code review with security focus: Manual code review for security vulnerabilities catches classes of issues that automated scanners miss. It also creates a cultural norm: developers who know their code will be reviewed for security write more secure code.

 

How to Evaluate Whether Your Existing Platform Is Secure

Many UK businesses reading this are not planning a new build. They are running a platform that is live, processing data, and generating revenue, and they want to understand whether it is as secure as they assumed when they signed it off. This is the right question. Here is how to answer it without commissioning a £50,000 security programme on day one.

 

Start with a structured self-assessment. Walk through your platform with your development team and ask six specific questions. First: where does personal data live, and is it encrypted at rest? Second: how are API keys and third-party credentials stored and rotated? Third: what does your access control model look like, and has it been tested? Fourth: when was the last time authentication and session management logic was reviewed? Fifth: do you have logs that would detect unusual access patterns? Sixth: what is your incident response process if a breach is discovered?

 

If you cannot get clear answers to all six questions within 48 hours, that absence of clarity is itself a finding.

 

Commission a scoped penetration test. A scoped penetration test for a mid-complexity web application is not a six-figure engagement. For most UK SMEs, a focused test targeting authentication, access control, and the top ten OWASP vulnerabilities costs between £3,500 and £9,000, depending on the scope and the quality of the testing provider. It will tell you more about your actual security posture in five days than any internal review conducted over three months.

 

Review your third-party integration inventory. List every external service your platform connects to, the method of authentication used, and the data that flows across each connection. This exercise alone often surfaces credential management issues and data minimisation failures that would not otherwise be visible until an incident forces the question.

 

Check your ICO registration and data processing records. UK businesses processing personal data are required to register with the ICO and maintain records of processing activities. If your organisation has not reviewed these records since the platform was built, they almost certainly do not reflect your current data flows. Inaccurate processing records are themselves a compliance exposure.

 

The Ongoing Security Obligation: What Happens After Launch

Security is not a project that ends at go-live. It is an operating discipline that requires continuous attention as the software evolves, the threat landscape changes, and the regulatory environment shifts. This is the part of the security conversation that most development agencies omit from their proposals, and the part that most business owners forget to ask about.

 

Software that is secure at launch can become insecure within months. Dependencies age and develop vulnerabilities. New features introduce new attack surfaces. Third-party services update their authentication models and break existing integrations. The threat landscape shifts as attackers identify new techniques and automate them at scale.

 

The best partners treat your platform as infrastructure rather than a one-time deliverable: they maintain a dependency update schedule, conduct quarterly security reviews as part of their retainer, and flag emerging risks before they become incidents rather than after.

 

If your current development partner has never discussed ongoing security obligations with you, that is a meaningful signal about how they approach the relationship. A team that cares about the security of what they build wants to know it stays secure after they hand it over.

 

This does not require a six-figure managed security service. For most UK SMEs, ongoing security maintenance means: monthly dependency updates, quarterly review of access control configurations, annual penetration testing, and a defined incident response process that everyone involved understands. That is not a large commitment. It is a professional one.

 

It is also worth noting that your security requirements will change as your business scales. A platform processing 500 customer records operates under different risk parameters than the same platform processing 50,000. Teams usually discover this the hard way when they hit a growth milestone and realise their security architecture was designed for the business they were, not the business they are becoming.

 

If you’re planning that growth trajectory and need to move fast without accumulating technical debt, working with fast MVP development companies in London that treat security as a first-order concern from the first sprint will save you the costly rebuild that most scaling businesses eventually face.

 

When to Rebuild vs. When to Remediate

Not every platform with security gaps needs to be rebuilt. But some do. Knowing the difference saves you from two equally expensive mistakes: rebuilding a platform that needed remediation, and remediating a platform that needed to be rebuilt.

 

Remediation is appropriate when the underlying architecture is sound and the vulnerabilities are discrete: a specific endpoint that needs input validation, a specific data field that needs encryption, a specific access control rule that needs to be enforced. These are fixable without touching the foundations.

 

Rebuild is appropriate when the security failures are architectural: when the data model was not designed with access control in mind, when authentication is entangled with business logic in a way that makes it impossible to improve without rewriting both, or when the platform was built on a framework or dependency set that is no longer maintained and cannot be updated without a structural migration.

 

The honest framework: if your security review produces a list of ten isolated findings, you have a remediation project. If it produces a finding that says “the access control model is not enforceable as currently implemented,” you have an architectural problem that remediation cannot solve.

 

This is not a decision to make without technical input. It is also not a decision to make with input only from the team that built the original platform. They have a structural incentive to believe the existing architecture is salvageable. An independent review gives you the honest answer.

 

For context on what full integration security looks like at the infrastructure level, understanding how the best custom middleware developers in London approach API security and data transit controls gives you a useful benchmark when evaluating your own platform’s design.

 

What Good Security Looks Like: The Practical Checklist for UK Businesses

Before closing, here is what a properly secured custom software platform actually contains. Use this as a review checklist against your current build or as a specification requirement for your next one.

 

Authentication and session management: Multi-factor authentication available for all privileged accounts. Session tokens stored in secure HTTP-only cookies, not localStorage. Idle session timeout configured. Failed login attempt lockout implemented. Password policies enforced at the application layer, not just the UI.

 

Access control: Role-based access control designed at the data layer, not just the application layer. Object-level authorisation checks on every data retrieval operation. Administrative functions separated from standard user functions with independent authentication requirements. Access control model documented and testable.

 

Data protection: Personal data encrypted at rest using AES-256 or equivalent. All data in transit over TLS 1.2 or higher. Data minimisation applied: systems collect only the data they need for their specific function. Retention schedules defined and enforced. Data classification documented.
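The TLS floor in that checklist item is a one-line configuration in most stacks. As an illustrative sketch, here is how a Python service using the standard library's `ssl` module would refuse anything below TLS 1.2 on its outbound connections:

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Create a client-side TLS context with certificate verification on
    and a hard floor of TLS 1.2, matching the checklist item above."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_client_context()
```

Equivalent one-line settings exist in most web servers and HTTP clients; the point is that the floor is set deliberately rather than inherited from whatever the platform default happened to be.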

 

Third-party integrations: API credentials stored in environment variables, not code repositories. Credential rotation schedule in place. Integration permissions scoped to minimum required access. Third-party services reviewed annually for continued security compliance.

 

Security testing: Penetration test conducted before initial launch. Penetration test repeated annually and after significant releases. OWASP Top 10 addressed in development standards. Dependency vulnerability scanning automated in the CI/CD pipeline.

 

Incident response: Incident response plan documented and communicated to all relevant stakeholders. ICO notification obligations understood by the person responsible for data protection. Breach detection logging in place. Data breach simulation exercise conducted annually.

 

This is not an exhaustive list. It is the minimum standard a UK business processing personal data should expect from any custom software platform.

 

Frequently Asked Questions

What is the biggest security risk in custom software for UK businesses?

Broken access control is the most commonly exploited vulnerability in custom software platforms, ranked number one in the most recent OWASP Top 10. It allows authenticated users to access data or functions beyond their permitted scope, often without triggering any visible error or alert. The risk is architectural: it cannot be fixed with a patch. It requires deliberate design at the data layer.

 

Does UK GDPR require encryption for data stored in custom software?

UK GDPR requires “appropriate technical measures” to protect personal data, and the ICO treats encryption at rest as a standard expectation for any system processing sensitive personal data. A platform that stores unencrypted personal data is exposed to ICO enforcement action following a breach, regardless of whether the breach could have been prevented by other means. Encryption at rest is not optional for GDPR-compliant custom software.

 

How much does a penetration test cost for a UK SME?

A scoped penetration test for a mid-complexity UK web application typically costs between £3,500 and £9,000, depending on the scope, the number of entry points tested, and the quality of the testing provider. Organisations processing sensitive data, operating in regulated sectors, or handling high transaction volumes should budget toward the higher end of that range and conduct tests annually rather than as a one-time exercise.

 

What should I ask a development agency about security before signing?

Ask five specific questions: How do you approach threat modelling during architecture? What security requirements appear in your definition of done for user stories? How are third-party credentials managed in the codebase? Do you conduct security-focused code reviews, and who performs them? What does your post-launch security support model look like? The quality of the answers tells you more about a team’s security culture than any certification they display on their website.

 

How do I know if my existing platform has security vulnerabilities?

Commission a scoped penetration test from an independent testing provider who has no prior relationship with your development team. A self-assessment by the team that built the platform is useful as a starting point but structurally limited: they will not find what they do not know to look for. An independent penetration test will surface the vulnerabilities that matter within five working days and give you a prioritised remediation list you can act on.

 

What is the ICO’s approach to fines for security failures?

The ICO applies a proportionality test: fines reflect the sensitivity of the data exposed, the volume of individuals affected, the degree of negligence involved, and whether the organisation had taken steps to address known risks. Businesses that can demonstrate documented security controls, regular testing, and a proactive incident response receive significantly different treatment than those that cannot. The absence of documentation is as damaging as the absence of controls.

 

Security Is Not a Feature. It Is the Infrastructure Everything Else Runs On.

The businesses that get this right treat security the way they treat legal compliance or financial controls: not as something to address after the platform is working, but as a foundational requirement that shapes every decision before the first line of code is written.

 

The businesses that get it wrong treat security as a phase two consideration. They find out it was phase one when something goes wrong.

 

Your development partner’s approach to security is one of the clearest signals of how they approach quality across every other dimension of the build. A team that doesn’t raise security architecture in their initial scoping conversations is a team that has not yet been held accountable for a breach on a platform they built. That accountability changes behaviour. Waiting for it to arrive at your expense is the expensive way to learn the lesson.

 

If you are evaluating partners for a new build, a rebuild, or a security-aware modernisation of an existing platform, the starting point is understanding how to choose a software agency in London well enough to know the difference between a team that treats security as infrastructure and a team that treats it as a checkbox. That decision determines everything downstream.

 

If you are already running a platform and want to understand your actual security posture, Foundry 5 offers structured security assessment sessions: a 30-minute conversation that maps your current architecture against the risks outlined in this guide and tells you exactly where your priorities should sit.

 

Book a security assessment call with Foundry5 30 minutes, no pitch deck, no obligation. Just a direct conversation about where your platform stands and what it takes to make it genuinely secure.

 

Security is not optional. It is the infrastructure everything else runs on.
