Either the MVP will take too long to build or you’ll find out that you ended up over-engineering certain parts of the system and drown in complexity and tech debt.
But this tradeoff doesn’t need to exist anymore. We’re seeing cloud architectures start to converge more and more, around tools like gRPC and kubernetes.
I feel like I live on a completely different planet than this person, because gRPC, kubernetes, and schemas that generate schemas is NOT how I would improve velocity or scale. I appreciate the article though, it was a good read.
bryanrasmussen
maybe they're saying that thanks to technologies like gRPC and kubernetes you don't have to make the tradeoff anymore - you can have both an MVP that takes too long to build and overengineered parts of the system that drown you in complexity!
cntainer
I burst into laughter reading this, probably because it happened to me.
bryanrasmussen
It would probably be unprofessional for me to share my troubles, but let's just say there was a source of my observation.
LaserToy
OMG, +1000000
tikhonj
Velocity is relative, so maybe the real trick is to convince your competitors to use gRPC and Kubernetes :)
jcun4128
Or the fast MVP has to get rewritten
emptysea
I've also used DRF in prod; the serialization in particular is super slow.
Also, the lack of asyncio support isn't great in this day and age.
Plain Django, without DRF, just using JsonResponse, is pretty great TBH; it has all the batteries for things like auth and testing. Although having it be async would be a huge plus.
I recommend checking out pydantic and using that to enforce a schema for endpoint responses. It's also much easier to understand (imho) than DRF's serializers, and you get static typing for free!
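Roughly what I mean - a minimal sketch with a made-up model and endpoint, assuming pydantic v2 on top of plain Django:

    # plain Django view with a pydantic model enforcing the response schema
    # (UserOut and its fields are just made up for the example)
    from django.http import JsonResponse
    from pydantic import BaseModel, ValidationError

    class UserOut(BaseModel):
        id: int
        email: str

    def user_detail(request, user_id: int):
        payload = {"id": user_id, "email": "user@example.com"}  # whatever your view computes
        try:
            body = UserOut(**payload)  # schema enforced here; a wrong shape blows up in tests
        except ValidationError as exc:
            return JsonResponse({"errors": exc.errors()}, status=500)
        return JsonResponse(body.model_dump())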
It's most of FastAPI (auto docs generation, great Pydantic integration, etc) but for Django.
emptysea
Never seen that before, looks good!
Raidion
I've found a good way to enforce schemas is to do all testing against a Prism proxy. The proxy takes the OpenAPI spec and basically maps the requests going in/out onto the model defined by the spec. This means that if your request doesn't match the spec, the test fails with a helpful error message.
Similarly, because the OpenAPI spec gives examples, frontend devs can use the proxy to mock requests and responses with literally 0 code written. This parallelizes development really well because you know everyone is building to the same spec.
Running a bunch of debug sessions locally with the proxy does slow things down, but it's a trivial addition to a containerized workflow.
Highly recommended! Parallelized development, prevents breaking changes, and provides a canonical API model that's always up to date and enforced!
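To make it concrete, a test against the proxy looks roughly like this (the port and endpoint are made up; it assumes Prism is already running as a validation proxy in front of the real service and is configured to turn spec violations into errors):

    # requests go through the Prism proxy, which checks both request and
    # response against the OpenAPI spec and fails loudly when they drift
    import requests

    PRISM_PROXY = "http://localhost:4010"  # hypothetical proxy address

    def test_get_user_matches_spec():
        resp = requests.get(f"{PRISM_PROXY}/users/123")
        assert resp.status_code == 200  # a spec violation surfaces as an error response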
Scalestein
This looks quite promising for some challenges I'm facing. Do you do anything towards having a central source of truth for API contracts? Like a single repo with the specs so that when a change is made everyone can develop against it?
Raidion
Yep, that's exactly how it goes. The repo contains the contracts. Tests pull the latest version from that repo, though you're able to locally override that and pull in an "unofficial" version.
The spec is created before any dev work begins. It might need changes as the work continues; no worries, the specs are versioned and your team can decide which versions are/aren't stable.
Zigurd
OpenAPI is hugely useful in the way you describe, as well as for making system capabilities understandable to non-coder project participants. Mocking responses enables non-coders, using tools like swagger.io, to see API documentation and easily interact with an API spec. This is vastly better than other attempts, like UML, to make system architecture visible outside the group of coders on a project.
It is never too early to start making an OpenAPI spec. Technical product managers can even deliver parts of specs in this form. The tools are easy enough.
pards
It talks about the drawbacks of having a single Go struct serve as both the schema for the database and the schema for the API. It recommends (correctly) that developers avoid this and write two Go structs, one for the API and one for the database, and then manually write some glue code between the two.
That's been the standard in just about every enterprise Java app I've worked on. Having separate internal and external data models is critical for managing change and preserving backwards-compatibility of your API.
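The same pattern translated into Python, just as an illustration (the article's example is in Go; the names here are made up):

    # separate internal (DB) and external (API) models, with explicit glue code
    from dataclasses import dataclass
    from pydantic import BaseModel

    @dataclass
    class UserRow:              # internal shape, mirrors the database table
        id: int
        email: str
        password_hash: str      # never leaves the service

    class UserResponse(BaseModel):  # external shape, part of the public API contract
        id: int
        email: str

    def to_api(row: UserRow) -> UserResponse:
        # the "boring" glue code: the DB schema can now change without
        # silently changing the API, and vice versa
        return UserResponse(id=row.id, email=row.email)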
onion2k
And what matters is the end result delivered to the user: can they see their dividends?
That's the happy path. If you focus on that then you will always get the impression that you're moving fast and everything is brilliant. The problem is that the happy path is the easy bit. Thinking through all the potential edge case problems is where things start feeling like you're moving a lot slower.
clavalle
Sounds a lot like what JHipster does with Yeoman generators.
stillbourne
I got hired about a year ago at my current company. There are currently 20 teams sharing a monorepo based on NX and using Angular as the dev framework. The problem was that while they were using a monorepo to commit their code, they weren't using any of the NX features to build and deploy the application. The entire code base was written as a monolith, with one external lib that they were just using as a bucket for random shit. Each team was in charge of a product, and each product had its own deadline and deployment schedule. So when one team needed to publish its work at the same time as another team, and Team A had no issues but Team B's code caused a production issue, both Team A and Team B had to have their deployments reverted to the previous version. Everyone was upset, and they were talking about abandoning Angular and switching to React. Like that would have changed anything - a monolith is a monolith however you write it.

So I told them y'all are doing this all wrong. I broke the application down into discrete layers: Styles, Atomic Components, Features, Kernel, Data Access, and Infrastructure. I showed them how to build a module and package each one as an npm package, demoed module federation and micro frontends, upgraded the existing repo to consume federated modules, and made a new repo for the new code. All components are now written in Storybook, each "product" is broken down into the correct layers, the layers all have documentation describing the artifacts produced at each layer, every module has its own Cypress suite, and everything is modular and can be installed via npm.

It's been a year since I started and we just deployed the first test module remote that has everything bundled as individual packages. I've never been so happy to reorg a project. I feel like I've obtained developer nirvana. I'm now a mad prophet bringing enlightenment to the unwashed masses.
dkyu0510
Any suggested resources for doing this besides the nx docs?