The crashes on exFAT are caused by a bug in the macOS exFAT driver. It is unclear whether other OSes are affected too.
https://github.com/mattn/go-sqlite3/issues/1355
We now utilize SQLite's concurrency features by creating a write pool (size 1) and a read pool, which lets us eliminate our own RWMutex that had blocked reads while a write was in progress. SQLite's WAL mode allows reads concurrent with writes, and our code is much cleaner.
Still need to do the same for the thumbnail DB.
Also could look into using prepared statements for further efficiency gains.
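For illustration, the pool split might look something like this (a minimal sketch, not the actual code; the DSN parameters follow mattn/go-sqlite3's conventions, and the pool sizes and helper name are assumptions):

```go
package db

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

// openPools opens two handles to the same database file: a write
// pool capped at one connection (SQLite allows only one writer at a
// time) and a read pool that can fan out, since WAL mode lets
// readers run concurrently with the writer.
func openPools(path string) (writeDB, readDB *sql.DB, err error) {
	dsn := "file:" + path + "?_journal_mode=WAL&_busy_timeout=5000"

	writeDB, err = sql.Open("sqlite3", dsn)
	if err != nil {
		return nil, nil, err
	}
	writeDB.SetMaxOpenConns(1) // serialize writes; no RWMutex needed

	readDB, err = sql.Open("sqlite3", dsn)
	if err != nil {
		writeDB.Close()
		return nil, nil, err
	}
	readDB.SetMaxOpenConns(8) // readers proceed concurrently under WAL

	return writeDB, readDB, nil
}
```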
This does away with the experimental generation of thumbhashes during import. It's easier to generate the thumbnails and thumbhashes at the same time.
This does add a DB lock to phase 1, but at this point the DB isn't the bottleneck in that phase.
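To illustrate the combined generation, a hypothetical sketch (computeThumbHash stands in for whatever thumbhash encoder is actually used; the downscale uses golang.org/x/image/draw, and the fixed thumbnail size is illustrative):

```go
package thumbs

import (
	"image"

	"golang.org/x/image/draw"
)

// computeThumbHash is a hypothetical stand-in for the thumbhash
// encoder; it is only here to show the shared pipeline.
func computeThumbHash(img image.Image) []byte { /* ... */ return nil }

// makeThumbAndHash downscales the source once and derives both the
// thumbnail and its thumbhash from the same small image, so the two
// outputs always stay in sync.
func makeThumbAndHash(src image.Image) (image.Image, []byte) {
	thumb := image.NewRGBA(image.Rect(0, 0, 256, 256))
	draw.CatmullRom.Scale(thumb, thumb.Bounds(), src, src.Bounds(), draw.Over, nil)
	return thumb, computeThumbHash(thumb)
}
```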
Fixed several bugs introduced by the pipeline refactoring.
Updated the goexif2 fork to use my latest commit, which fixes a failure to find EXIF data in some JPEG images.
Embeddings now refer to the item they are for, rather than an item referring to a single embedding. This allows items to have multiple embeddings if necessary, which gives us some flexibility when models change/improve, etc.
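A hypothetical schema sketch of the reversed relationship (table and column names are illustrative, not the real schema):

```go
package schema

// Hypothetical DDL illustrating the flipped relationship: instead of
// an item pointing at a single embedding, each embedding row points
// back at its item, so an item can carry any number of embeddings
// (e.g., one per model generation).
const createEmbeddings = `
CREATE TABLE embeddings (
	id        INTEGER PRIMARY KEY,
	item_id   INTEGER NOT NULL REFERENCES items(id),
	model     TEXT NOT NULL, -- which model produced this vector
	embedding BLOB NOT NULL  -- raw vector bytes
);
CREATE INDEX idx_embeddings_item ON embeddings(item_id);
`
```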
Also reworked the Python server to use a smaller model (base siglip2 instead of so400m) so that it fits on more GPUs, including my 4070, and added a new "DeviceManager" (which ChatGPT helped me figure out) that picks a GPU when one has enough free memory, adapting as conditions change.
* Major processor refactor
- New processing pipeline, vastly simplified
- Several edge-case bug fixes related to Google Photos (but they apply generally too)
- Major import speed improvements
- UI bug fixes
- Update dependencies
The previous 3-phase pipeline would first check for an existing row in the DB, then decide what to do (insert, update, skip, etc.), then download the data file, then update the row and apply lots of logic to see if the row was a duplicate, etc. Very messy, actually. The reason was to avoid downloading files that might not need to be downloaded.
In practice, the data almost always needs to be downloaded anyway, and I had to keep hacking on the pipeline to handle edge cases related to concurrency and to making decisions about the item/row without having the data on hand. I was able to get all the tests to pass until the final boss appeared: an edge-case bug in Google Photos -- a very important one that happened to be exposed by my wedding album, of all things. I was unable to fix the problem without a rewrite of the processor.
The problem was that Google Photos splits the data and metadata into separate files, and sometimes separate archives. The filename is in the metadata, and worse yet, there are duplicates if the media appears in different albums/folders, where the only way to know they're a duplicate is by filename+content. Retrieval keys just weren't enough to solve this, and I narrowed it down to a design flaw in the processor. That flaw was downloading the data files in phase 2, after making the decisions about how to handle the item in phase 1, then having to re-apply decision logic in phase 3.
The new processing pipeline downloads the data up front in phase 1 (and there's a phase 0 that splits out some validation/sanitization logic, but it is of no major consequence). This can run concurrently for the whole batch. Then in phase 2, we obtain an exclusive write lock on the DB and, now that we have ALL the item information available, we can check for an existing row, make decisions on what to do, and even rename/move the data file if needed, all in one phase rather than split across two separate phases.
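Roughly, the shape of the new flow (a sketch with hypothetical types and helper names, not the actual processor code; errgroup supplies the bounded concurrency):

```go
package processor

import (
	"context"
	"sync"

	"golang.org/x/sync/errgroup"
)

// Item, downloadData, and decideAndStore are hypothetical stand-ins
// for the real processor pieces; only the phase structure is the point.
type Item struct{ /* ... */ }

var writeLock sync.Mutex

func downloadData(ctx context.Context, it *Item) error   { return nil }
func decideAndStore(ctx context.Context, it *Item) error { return nil }

func processBatch(ctx context.Context, items []*Item) error {
	// Phase 1: download data files for the whole batch concurrently
	// (phase 0 validation/sanitization happens before this).
	g, gctx := errgroup.WithContext(ctx)
	g.SetLimit(16) // concurrency limit is illustrative
	for _, it := range items {
		it := it // capture for pre-Go 1.22 loop semantics
		g.Go(func() error { return downloadData(gctx, it) })
	}
	if err := g.Wait(); err != nil {
		return err
	}

	// Phase 2: all item info is now in hand, so take the exclusive
	// write lock once and do existence checks, insert/update/skip
	// decisions, and any file renames in a single pass.
	writeLock.Lock()
	defer writeLock.Unlock()
	for _, it := range items {
		if err := decideAndStore(ctx, it); err != nil {
			return err
		}
	}
	return nil
}
```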
This simpler pipeline still has lots of nuance, but in my testing, imports run much faster! And the code is easy to reason about.
On my system (which is quite fast), I was able to import most kinds of data at a rate of over 2,000 items per second. And for media like Google Photos, it's a 10x increase from before thanks to the concurrency in phase 1: up from about 3-5/second to around 30-50/second, depending on file size.
An import of about 200,000 text messages, including media attachments, finished in about 2 minutes.
My Google Photos library, which used to take almost a whole day, now takes only a couple hours to import. And that's over USB.
Also fixed several other minor bugs/edge cases.
This is a WIP. Some more cleanup and fixes are coming. For example, my solution to fix the Google Photos import bug is currently hard-coded (it happens to work for everything else so far, but is not a good general solution). So I need to implement a general fix for that before this is ready to merge.
* Round out a few corners; fix some bugs
* Appease linter
* Try to fix linter again
* See if this works
* Try again
* See what actually fixed it
* See if allow list is necessary for replace in go.mod
* Ok fine just move it into place
* Refine retrieval keys a bit
* One more test
The gofakeit upgrade uses the new math/rand/v2 package, which is built around uint64 rather than int64, so we had to change a bunch of row IDs from int64 to uint64.
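For context, v1's APIs yield int64 while v2's yield uint64, which is what rippled into the row ID types:

```go
package main

import (
	"fmt"
	randv1 "math/rand"
	randv2 "math/rand/v2"
)

func main() {
	// v1 sources are built on Int63() int64; v2 sources on Uint64().
	var id1 int64 = randv1.Int63()
	var id2 uint64 = randv2.Uint64()
	fmt.Println(id1, id2)
}
```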
* Schema revisions for new import flow and thumbnails
* WIP settings
* WIP quick schema fix
* gallery: Image search using ML embeddings
Still very rough around the edges, but basically works.
'uv' gets auto-installed, but currently requires restarting Timelinize before it can be used.
Lots of tuning and optimization is needed. There is much room for improvement.
Still migrating from imports -> jobs, so that part of the code and schema is still a mess.
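At its core, ranking items by embedding similarity reduces to a nearest-neighbor search over the stored vectors; a naive brute-force sketch (hypothetical, not the actual query path, which would use an index at scale):

```go
package search

import (
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	return dot / (math.Sqrt(na)*math.Sqrt(nb) + 1e-12)
}

// rankBySimilarity sorts candidate item IDs by similarity to the
// query embedding, most similar first. Brute force is fine for small
// libraries; a vector index would replace this at scale.
func rankBySimilarity(query []float32, vecs map[int64][]float32) []int64 {
	ids := make([]int64, 0, len(vecs))
	for id := range vecs {
		ids = append(ids, id)
	}
	sort.Slice(ids, func(i, j int) bool {
		return cosine(query, vecs[ids[i]]) > cosine(query, vecs[ids[j]])
	})
	return ids
}
```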
* Implement search for similar items
* Finish import/planning rewrite; it compiles and tests pass
* Fix some bugs, probably introduce other bugs
* WIP new import planning page
* Fix Google Photos and Twitter recognition
* Finish most of import page UI; start button still WIP
* WIP: Start Import button
* Fixes to jobs, thumbnail job, import job, etc.
* Implement proper checkpointing support; jobs fixes