Your database schema is an integral part of your application, so store your changelog files alongside your application code. That lets your existing version control system keep everything in sync: whenever you commit your code, your database changes go with it, and whenever anyone updates their code, they get the new database schema.
Branches are an integral part of developing code, and your database changes need to flow along with them. Because Liquibase tracks each changeSet independently rather than relying on a single incrementing "database version", all the new changeSets execute as expected when you merge in a branch, even if "later" changeSets have already been executed.
When you do run into merge conflicts, the simple text formats used for changelog files are easy to merge with your favorite tools.
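As a sketch of why merge order does not matter: Liquibase identifies each changeSet by its id, author, and changelog path, and records executed changeSets in its tracking table, so on the next update only changeSets it has not yet seen are run. The ids and author names below are illustrative, not from the source.

```xml
<!-- Already deployed from the main branch; recorded in the
     DATABASECHANGELOG table, so it is skipped on future updates. -->
<changeSet id="add-status-column" author="carol">
    <addColumn tableName="orders">
        <column name="status" type="varchar(20)"/>
    </addColumn>
</changeSet>

<!-- Merged in later from a feature branch; Liquibase has no record
     of it, so it simply runs on the next update, regardless of how
     the branch history interleaves with other changeSets. -->
<changeSet id="add-priority-column" author="dave">
    <addColumn tableName="orders">
        <column name="priority" type="int"/>
    </addColumn>
</changeSet>
```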
Liquibase supports changelogs written in XML, YAML, JSON, and SQL. Use whichever is most readable to you.
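To show how the formats compare, here is the same hypothetical changeSet (the table and column names are placeholders) written in both XML and YAML:

```xml
<changeSet id="1" author="alice">
    <addColumn tableName="users">
        <column name="email" type="varchar(255)"/>
    </addColumn>
</changeSet>
```

```yaml
databaseChangeLog:
  - changeSet:
      id: 1
      author: alice
      changes:
        - addColumn:
            tableName: users
            columns:
              - column:
                  name: email
                  type: varchar(255)
```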
Liquibase tracks which changeSets have run against each database, so once you create a changeSet you can be confident it will be deployed consistently through QA and production.
For best results, append new changeSets as needed, then run liquibase update to apply them to your local database. This works better than making changes to your database directly and then rewriting them as a changeSet, because you are truly running the same update everyone else will.
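The workflow is simple: add a changeSet to the end of your changelog, then apply it locally. The file name and changeSet details below are only examples.

```xml
<!-- Appended to the end of changelog.xml -->
<changeSet id="add-last-login" author="alice">
    <addColumn tableName="users">
        <column name="last_login" type="timestamp"/>
    </addColumn>
</changeSet>
```

Then run the update against your local database (assuming your connection details are configured, e.g. in liquibase.properties):

```
liquibase update --changelog-file=changelog.xml
```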
XML-formatted changelogs have their advantages, but many DBAs still prefer good, old-fashioned SQL. If that is what you are most comfortable working with, Liquibase-formatted SQL gives you the same changeSet tracking as any other changelog format, but lets you specify the exact SQL you want.
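A Liquibase-formatted SQL changelog is a plain SQL file whose structured comments mark the changeSets. The table, author, and rollback statement here are illustrative:

```sql
--liquibase formatted sql

--changeset bob:create-orders-table
CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT NOT NULL
);
--rollback DROP TABLE orders;
```

Everything between one `--changeset author:id` comment and the next is executed as a single tracked changeSet, just as it would be in an XML, YAML, or JSON changelog.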
Normally, test data is stored in something like CSV files that are loaded into the database after it is built. The problem with this strategy is that schema changes will usually break the data-load process, leading to hours spent figuring out what the test data was trying to exercise and then adjusting the files to match the new schema.
Instead of loading the data into the final schema, build up your test data within your changelog file. Use loadData or standard SQL to load data into the schema as it is now; then, as new schema changes are appended to the changelog, your test data is migrated just like production data would be. This not only keeps you from having to continually update your CSV files, but also helps verify that existing data is handled correctly.
Use contexts and labels to mark which changeSets contain test data so they are not deployed to production.
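A changeSet like the following (the file path, table, and context name are assumptions for illustration) loads test data only when the matching context is requested:

```xml
<changeSet id="load-test-users" author="alice" context="test">
    <loadData tableName="users"
              file="data/test-users.csv"
              relativeToChangelogFile="true"/>
</changeSet>
```

Running `liquibase update --contexts=test` against a development database includes this changeSet; running the update without that context, as you would in production, skips it.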
Liquibase ships with several tools you can use to verify that changes have been applied correctly, check the current database state, and confirm that nobody is sneaking in changes out of process.
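For example, a few of the built-in commands (assuming a configured liquibase.properties for connection details) cover these checks:

```
liquibase status     # list changeSets in the changelog not yet applied
liquibase validate   # check the changelog for errors before running it
liquibase history    # show which changeSets have been deployed, and when
liquibase diff       # compare the database against a reference database
```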
Beyond all the standard Liquibase functionality, Datical lets you