Amazon RDS for PostgreSQL 13.x reached the end of standard support on February 28, 2026. What sounds like routine maintenance has instead created a tangled problem for businesses running data pipelines on AWS infrastructure.
PostgreSQL 14, which shipped in 2021, changed the default password authentication scheme to the more secure SCRAM-SHA-256. The shift makes sense from a security standpoint: older versions default to unsalted MD5-based password hashing, which leaves databases more exposed to compromise. AWS was right to encourage the upgrade.
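To see why an old client chokes on the change, here is a minimal stdlib-only sketch of how a SCRAM-SHA-256 password verifier is derived, per RFC 5802/7677 and PostgreSQL's documented verifier format. The password, salt, and iteration count shown are illustrative (4096 is PostgreSQL's default iteration count); a real server generates a random salt.

```python
import base64
import hashlib
import hmac

def scram_sha_256_verifier(password: str, salt: bytes, iterations: int = 4096) -> str:
    """Build a PostgreSQL-style SCRAM-SHA-256 verifier (RFC 5802/7677)."""
    # Salted, iterated hash of the password (PBKDF2 with HMAC-SHA-256).
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Derive keys from the salted password; only derived keys are stored,
    # never the password itself.
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    b64 = lambda raw: base64.b64encode(raw).decode()
    return f"SCRAM-SHA-256${iterations}:{b64(salt)}${b64(stored_key)}:{b64(server_key)}"

# Illustrative values only.
print(scram_sha_256_verifier("s3cret", salt=b"0123456789abcdef"))
```

Contrast this with the pre-14 default, where the stored verifier is just an unsalted `md5(password || username)` hash. A driver that only knows how to answer the MD5 challenge has no way to complete a SCRAM exchange, which is exactly the failure mode described next.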
But here is the bind: PostgreSQL 14 breaks AWS Glue, AWS's own managed ETL service, which cannot handle that authentication scheme. If you upgrade your RDS database to follow AWS's own security guidance, AWS's own data pipeline tooling responds with "Authentication type 10 is not supported" and stops working.
When you move to a newer PostgreSQL on RDS, Glue's connection-testing infrastructure uses an internal JDBC driver that predates SCRAM support (the open-source PostgreSQL JDBC driver added it in version 42.2.0). The "Test Connection" button, which you'd click to verify your setup works before trusting it with production data, simply doesn't. A community expert on AWS's support forum acknowledged three years ago that "the tester is pending a driver upgrade," and assured users that crawlers use their own drivers and should work fine. Users in the same thread reported back that the crawlers also fail.
This is not an edge case. Running Glue against RDS PostgreSQL is a bread-and-butter data engineering pattern, a well-paved path that AWS has let fall into disrepair. The incompatibility has been known since PostgreSQL 14 shipped in 2021. Yet neither the RDS team nor the Glue team moved to fix the gap before the deprecation deadline arrived.
Why This Happened
AWS has tens of thousands of engineers organised into hundreds of semi-autonomous service teams. The RDS team ships deprecations on the RDS lifecycle, the Glue team maintains driver dependencies on the Glue roadmap, and nobody explicitly owns the gap between them. This is not malice or a deliberate revenue play. This is not a conspiracy, as AWS lacks the internal cohesion needed to pull one of those off. This is also not a carefully-constructed revenue-enhancement mechanism, because the Extended Support revenue is almost certainly a rounding error on AWS's balance sheet compared to the customer ill-will it generates. It is simply the cost of enormous scale; when an organisation grows large enough, internal coordination becomes harder, not easier.
The reality is harsh for anyone managing production databases. The customer discovers the incompatibility in production, usually at an inconvenient hour. By that point, the deadline has passed.
The Available Fixes All Carry Costs
You can downgrade password encryption on your database to the older, less secure standard: the one you just upgraded away from, per AWS's own recommendations. You can bring your own JDBC driver, which disables connection testing and may not support all the features you want. Or you can rewrite your ETL workflows as Python shell jobs. Each option means giving up the value proposition of a managed service or walking back the security improvement you were told to make.
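On the first option, PostgreSQL accepts a password literal that is already in `md5<hex>` form and stores it verbatim, so you can downgrade a single ETL role rather than flipping `password_encryption` for the whole cluster. A hedged sketch of generating that statement (the role name and password here are hypothetical; run the resulting SQL yourself over a trusted, TLS-protected connection):

```python
import hashlib

def md5_password_literal(username: str, password: str) -> str:
    # PostgreSQL's MD5 verifier format: "md5" + md5(password || username), hex-encoded.
    # A password supplied in this form is stored as-is, i.e. as an MD5 verifier.
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

def downgrade_role_sql(username: str, password: str) -> str:
    # Illustrative helper: builds the ALTER ROLE statement for one role,
    # leaving the cluster-wide password_encryption setting untouched.
    return f"ALTER ROLE {username} PASSWORD '{md5_password_literal(username, password)}';"

print(downgrade_role_sql("glue_etl", "example-password"))
```

This narrows the blast radius of the security walk-back to the one role Glue uses, but it is still a downgrade: the stored hash is unsalted and the role loses SCRAM protection.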
For customers who stayed on PostgreSQL 13 specifically to avoid this problem, there is another path: pay for Extended Support. Charges amount to USD$0.10 per vCPU-hour for the first two years. For a 16-vCPU instance, this works out to roughly USD$14,000 per year; after year two the rate doubles, pushing the bill to roughly USD$28,000 annually. These charges were automatically applied to databases that did not explicitly opt out at cluster creation time, a detail easy to miss when managing infrastructure at scale.
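A back-of-the-envelope check of those figures, assuming a non-leap year (8,760 hours) and the doubling of the per-vCPU rate after year two:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year
VCPUS = 16

rate_years_1_2 = 0.10              # USD per vCPU-hour, Extended Support years 1-2
rate_year_3 = 2 * rate_years_1_2   # rate doubles from year three onward

annual_cost_y1 = VCPUS * rate_years_1_2 * HOURS_PER_YEAR
annual_cost_y3 = VCPUS * rate_year_3 * HOURS_PER_YEAR

print(f"Years 1-2: ${annual_cost_y1:,.0f}/year")  # ~$14,016
print(f"Year 3+:   ${annual_cost_y3:,.0f}/year")  # ~$28,032
```

Note that a Multi-AZ deployment runs a standby instance as well, so its vCPU count, and therefore the bill, would be higher still.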
The pricing itself is not a conspiracy. AWS has real costs associated with maintaining older software versions and providing security patches after the community stops supporting them. But the structure creates a situation where customers face a bill from a company that also controlled the timeline for fixing the underlying problem, and all customer response options are bad.
A Broader Pattern
This is not unique to this specific authentication issue. This is simply organisational complexity doing what organisational complexity does. It's the same reason your company's internal tools don't talk to each other; AWS is just doing it at a scale where the blast radius is someone else's production database.
For organisations running critical infrastructure on cloud platforms, the lesson is clear: test thoroughly before deadlines arrive, and assume that integration gaps between services may not be resolved by the time official support ends. The alternative is either staying on unsupported versions at escalating cost, or gambling that workarounds will be available when you need them.
AWS has provided a detailed explanation of the PostgreSQL 13 deprecation timeline, and customers can explore Extended Support options or consult the latest RDS PostgreSQL version documentation for upgrade paths.