I asked on Discord:
Are there any tools to weave all the components together so that both developers and the CI/CD tool can easily run the same suite of integration tests? I've done a PoC that takes one application in the pipeline and runs it locally while it talks to an AWS environment I've provisioned with Terraform. I intend to use docker-compose to bring up all the apps together for integration testing. The going is slow. How do other people tackle the same problem?
and user hubt gave me some great insights. Here is an abridged version of their answers:
Some people move to Kubernetes instead. There are tons of CI/CD tools built on it.
I have used shared dev environments for a long time. Not having individual dev environments has its drawbacks, but it also has strengths.
One advantage is that everyone sees the same data and database. At some point you start to run into problems where your bugs depend on specific data. With individual dev environments, these bugs are hard to reproduce for everyone unless there is a way of sharing both code and data. If you have a super solid, consistent data set that you have curated and built for every test, then you are miles ahead of the game, but maintaining and updating that data set and test suite is very hard to do. We have a promotion process from shared dev to staging to prod. We run continuous integration tests in dev and staging. People break those on a regular basis.
We don't have developers run them. We just [run] them automatically hourly. Our process works reasonably well. The hourly integration test cadence is driven more by how long the integration test suite takes than by a strict hourly schedule. [If somebody deleted a file from the dev environment thinking it wasn't needed,] they would have broken dev, and we would debug dev or staging to fix things.
Admittedly this probably doesn't scale for 100+ developers. I deal with a team of a few dozen developers so it isn't quite the same as supporting a team of 4 or a team of 100+.
We also have separate unit tests for some things. Those are targeted and should be 100% successful. Integration tests can fail for a variety of reasons unrelated to someone's code; unit tests should not. So, yes, the integration tests are more general.
[Regarding creating the dev environment on every test suite run] I think that only makes sense in very tightly controlled and constrained environments like if you have an application that has no database.
It's not like we are able to recreate our production environment regularly, so in some ways you want to handle things like you do production. Recreating environments per test run would make sense if you are shipping packaged software. But as a cloud product/web service it makes less sense.
Discord user dan_hill2802 also gave his insights here and here:
We have developers able to easily build AWS environments for development and testing, and tear them down again when done. The pipelines use the same mechanism: build independent environments in AWS, run the tests, then tear down.
We aim to keep applications as cloud agnostic as possible, so a lot of application development is done locally with Docker containers. We also use localstack.cloud [open source community edition] for mocking some of the cloud endpoints. However, it's still not quite representative enough, so we enable developers to deploy the application and all supporting infrastructure in AWS; they can also attach to their environment for debugging.
The "deploy" is a wrap-up of a few steps, including a Terraform apply, setting up database users for the app, and seeding the database with baseline data; the teardown then does the reverse of that (a rough sketch of such a wrapper follows the tool list below).
The individual AWS stacks are mostly for infrastructure engineers working on the Terraform code, where local really isn't an option. We then made it available to the devs too, who were asking about being able to test in AWS. The tools we use (in order of priority):
- Terraform
- Make
- Kitchen
- Concourse CI (but could use any other CI tool)
Number 4 (Concourse CI) also allows devs to create AWS environments without holding any AWS credentials themselves.
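The "deploy"/"teardown" wrapper described above is essentially orchestration, so here is a minimal sketch of the idea in Python, assuming one Terraform workspace per environment (the `-or-create` flag needs Terraform 1.4+). dan_hill2802 actually drives this with Make and Kitchen, and the `create_db_users` and `seed_baseline_data` helpers here are hypothetical placeholders:

```python
# Sketch of a per-environment "deploy"/"teardown" wrapper: Terraform apply,
# then database users and baseline data; teardown is the reverse.
import subprocess

def run(cmd: list[str]) -> None:
    """Run a shell command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

def create_db_users(env_name: str) -> None:
    """Hypothetical placeholder: create the application's database users."""

def seed_baseline_data(env_name: str) -> None:
    """Hypothetical placeholder: load the curated baseline data set."""

def deploy(env_name: str) -> None:
    # One isolated Terraform workspace (and hence AWS stack) per environment.
    run(["terraform", "workspace", "select", "-or-create", env_name])
    run(["terraform", "apply", "-auto-approve", f"-var=env_name={env_name}"])
    create_db_users(env_name)
    seed_baseline_data(env_name)

def teardown(env_name: str) -> None:
    # Reverse of deploy: destroy the whole stack, data included.
    run(["terraform", "workspace", "select", env_name])
    run(["terraform", "destroy", "-auto-approve", f"-var=env_name={env_name}"])
```

Because the pipelines call the same deploy/teardown entry points as developers do, CI runs and local runs stay in lockstep.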
- Upon each raised PR, a test environment is built using Terraform inside GitHub Actions.
- GitHub Actions checks out the code, runs the Alembic DB scripts to prepare the Postgres database, and then runs the integration tests against this DB (see the Alembic sketch after this list).
- Postgres is torn down after the tests. This is partly to save money (the DB is unused except during PRs) and partly to ensure nobody messes up the Alembic scripts; that is, the DB schema can always be built from scratch.
- We wrote some Python code to generate synthetic test data. It uses pyarrow.parquet to read the schema of a file used in manual tests and pads the synthetic data with the columns that the test doesn't use; Athena will complain if you don't do this (see the pyarrow sketch after this list).
- We upload the synthetic data to an S3 bucket that Athena can see by registering it with AWS Glue (think Hive metastore). You only have to register the files once, and that was done manually, since this Athena instance was built manually. Automating this was left as tech debt.
- Now the integration tests can run SQL queries through Athena with boto3.client(service_name="athena", ...), as sketched below. This data will be the same on every run of the test suite.
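The Alembic step can be driven from Python as well as from the CLI. A minimal sketch, assuming an `alembic.ini` in the working directory and the test database's URL in a `DATABASE_URL` environment variable (both are illustrative assumptions, not necessarily how our pipeline names things):

```python
# Build the Postgres schema from scratch with Alembic before the tests run.
import os

from alembic import command
from alembic.config import Config

cfg = Config("alembic.ini")  # assumes alembic.ini is in the working directory
cfg.set_main_option("sqlalchemy.url", os.environ["DATABASE_URL"])

# Upgrading an empty database to "head" is exactly the "schema can always be
# built from scratch" guarantee the PR pipeline enforces.
command.upgrade(cfg, "head")
```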
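The column-padding trick looks roughly like this with pyarrow. The file name, column names, and types are made up for illustration, and the arrays you do supply must match the types in the reference schema:

```python
# Read the schema from a reference Parquet file and null-pad every column the
# test doesn't assert on, so the output matches the schema Athena expects.
import pyarrow as pa
import pyarrow.parquet as pq

reference_schema = pq.read_schema("manual_test_sample.parquet")  # example file

# The handful of columns the integration test actually asserts on (examples).
test_columns = {
    "order_id": pa.array([1, 2, 3], type=pa.int64()),
    "status": pa.array(["NEW", "PAID", "SHIPPED"]),
}
num_rows = 3

# Null-fill the remaining columns so the file carries the full schema.
arrays = [
    test_columns.get(field.name, pa.nulls(num_rows, type=field.type))
    for field in reference_schema
]
table = pa.Table.from_arrays(arrays, schema=reference_schema)
pq.write_table(table, "synthetic_orders.parquet")
```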
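And the query itself through boto3; the region, database name, query, and result bucket here are placeholders:

```python
# Run one Athena query from an integration test and poll until it finishes.
import time

import boto3

athena = boto3.client(service_name="athena", region_name="eu-west-2")

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "integration_tests"},  # Glue database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Athena is asynchronous: poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

assert state == "SUCCEEDED", f"Athena query ended in state {state}"
rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```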
Deploying the environment is slow and can take well over 10 minutes:
- It takes approximately 4 minutes to create a Postgres DB in AWS and another 4 minutes to tear it down when done.
- It also takes a few minutes to create a Python environment (fresh on each test run), since we need to download and install all the dependencies.
- Finally, it appears that GitHub Actions runs in the US while our DBs are in the EU, and this cross-Atlantic traffic is slow (think loading hundreds of megs of baseline data into a DB).