Commit graph

17 commits

Author SHA1 Message Date
Matthew Holt
9fc0c3e5c1
Work around Google Photos bug with missing ext on sidecar video files
Also fix motion picture transcoding for data files that don't have an extension, by looking up the media type of the image
2025-10-02 18:16:24 -06:00
Matthew Holt
31dd7fd6f5
Try to support multi-archive Facebook exports; fix conversation loading
Conversations with more than ~6 participants should now load properly, and faster, thanks to a simplified query
2025-09-16 11:26:23 -06:00
Matt Holt
a85f47f1a3
Major processor refactor (#112)
* Major processor refactor

- New processing pipeline, vastly simplified
- Several edge case bug fixes related to Google Photos (but applies generally too)
- Major import speed improvements
- UI bug fixes
- Update dependencies

The previous 3-phase pipeline would first check for an existing row in the DB, then decide what to do (insert, update, skip, etc.), then download the data file, then update the row and apply lots of logic to see if the row was a duplicate, etc. Very messy, actually. The reason was to avoid downloading files that might not need to be downloaded.

In practice, the data almost always needs to be downloaded, and I had to keep hacking on the pipeline to handle edge cases related to concurrency and to making decisions about the item/row without having the data on hand. I was able to get all the tests to pass until the final boss appeared: an edge case bug in Google Photos -- a very important one that happened to be exposed by my wedding album, of all things. I was unable to fix the problem without a rewrite of the processor.

The problem was that Google Photos splits the data and metadata into separate files, and sometimes separate archives. The filename is in the metadata, and worse yet, there are duplicates if the media appears in different albums/folders, where the only way to know they're a duplicate is by filename+content. Retrieval keys just weren't enough to solve this, and I narrowed it down to a design flaw in the processor. That flaw was downloading the data files in phase 2, after making the decisions about how to handle the item in phase 1, then having to re-apply decision logic in phase 3.

The new processing pipeline downloads the data up front in phase 1 (and there's a phase 0 that splits out some validation/sanitization logic, but is of no major consequence). This can run concurrently for the whole batch. Then in phase 2, we obtain an exclusive write lock on the DB and, now that we have ALL the item information available, we can check for existing row, make decisions on what to do, even rename/move the data file if needed, all in one phase, rather than split across 2 separate phases.
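
The shape of the new pipeline can be sketched roughly like this. All names here are hypothetical stand-ins, and the DB lock is modeled with a plain mutex for brevity; the real processor is of course more involved:

```go
package main

import (
	"fmt"
	"sync"
)

// item is a hypothetical unit of import work.
type item struct {
	id   int
	data []byte
}

// download stands in for fetching the item's data file.
func download(it *item) {
	it.data = []byte(fmt.Sprintf("payload-%d", it.id))
}

// process runs the batch through the two phases described above.
func process(batch []*item) {
	// Phase 1: download all data up front, concurrently for the whole
	// batch. No DB access happens here, so there is nothing to race on.
	var wg sync.WaitGroup
	sem := make(chan struct{}, 8) // bound concurrency
	for _, it := range batch {
		wg.Add(1)
		go func(it *item) {
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			download(it)
		}(it)
	}
	wg.Wait()

	// Phase 2: take one exclusive write lock and, with ALL item
	// information now available, check for existing rows and decide
	// insert/update/skip (and rename/move files) in a single phase.
	var dbMu sync.Mutex
	dbMu.Lock()
	defer dbMu.Unlock()
	for _, it := range batch {
		// existing-row lookup and dedup decisions would go here
		fmt.Printf("item %d: %d bytes\n", it.id, len(it.data))
	}
}

func main() {
	process([]*item{{id: 1}, {id: 2}, {id: 3}})
}
```

The key property is that no decision logic runs before the data exists, so nothing has to be re-checked or re-applied later.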

This simpler pipeline still has lots of nuance, but in my testing, imports run much faster! And the code is easy to reason about.

On my system (which is quite fast), I was able to import most kinds of data at a rate of over 2,000 items per second. And for media like Google Photos, it's a 10x increase from before thanks to the concurrency in phase 1: up from about 3-5/second to around 30-50/second, depending on file size.

An import of about 200,000 text messages, including media attachments, finished in about 2 minutes.

My Google Photos library, which used to take almost a whole day, now takes only a couple hours to import. And that's over USB.

Also fixed several other minor bugs/edge cases.

This is a WIP. Some more cleanup and fixes are coming. For example, my solution to fix the Google Photos import bug is currently hard-coded (it happens to work for everything else so far, but is not a good general solution). So I need to implement a general fix for that before this is ready to merge.

* Round out a few corners; fix some bugs

* Appease linter

* Try to fix linter again

* See if this works

* Try again

* See what actually fixed it

* See if allow list is necessary for replace in go.mod

* Ok fine just move it into place

* Refine retrieval keys a bit

* One more test
2025-09-02 11:18:39 -06:00
Matthew Holt
336ff7fae0
Fix new lint warnings
Must have been a change in golangci-lint
2025-07-01 15:41:07 -06:00
Matthew Holt
41ff81ceb6
Minor enhancements, fix howStored for items deduped by data file at end of pipeline
2025-05-30 16:20:26 -06:00
Matthew Holt
ebc731d221
Vastly speed up imports (WIP)
2025-05-30 11:14:09 -06:00
Matthew Holt
31f003b3d4
Fix metadata updates for items and relationships
Also relocate data files if the item's timestamp changes
2025-05-28 18:09:46 -06:00
Matthew Holt
1bd7c2a5c8
Fix several bugs related to duplicates, lat/lon tolerances, etc.
Separate altitude out from latlon in unique constraints
2025-05-25 12:36:03 -06:00
Matthew Holt
ae3a5d02b0
Field update preferences allow more control over item updates
2025-05-09 10:04:03 -06:00
Matthew Holt
ba4635cf7e
Fix data file handling
It wasn't updated properly with the big pipeline refactor
2025-05-04 13:28:20 -06:00
Matthew Holt
72c8ede971
More improvements/fixes to thumbnail jobs
2025-05-01 22:18:50 -06:00
Matthew Holt
3d2222fce2
Fix thumbnail job size count and paging; other minor fixes
Including one fix for a panic introduced by obfuscated logging during processing
2025-05-01 11:15:13 -06:00
Matthew Holt
25712e7c61
Fix thumbnail job size counts
2025-04-28 10:26:59 -06:00
Matthew Holt
f0697d2d6b
Refactor embedding jobs; enhance tooltips; upgrade gofakeit to v7
The gofakeit upgrade uses the new math/rand/v2 package, which uses uint64 more than int64, so we had to change a bunch of row IDs from int64 to uint64.
2025-04-24 16:33:41 -06:00
Matt Holt
746e5d6b5c
Refactored import flow, new import UI, thumbnails stored in timeline, etc. (close #3) (#43)
* Schema revisions for new import flow and thumbnails

* WIP settings

* WIP quick schema fix

* gallery: Image search using ML embeddings

Still very rough around the edges, but basically works.

'uv' gets auto-installed, but currently requires restarting Timelinize before it can be used.

Lots of tunings and optimizations are needed. There is much room for improvement.

Still migrating from imports -> jobs, so that part of the code and schema is still a mess.

* Implement search for similar items

* Finish import/planning rewrite; it compiles and tests pass

* Fix some bugs, probably introduce other bugs

* WIP new import planning page

* Fix Google Photos and Twitter recognition

* Finish most of import page UI; start button still WIP

* WIP: Start Import button

* Fixes to jobs, thumbnail job, import job, etc.

* Implement proper checkpointing support; jobs fixes
2024-12-06 11:03:29 -07:00
Matthew Holt
3066ddbeb9
Major linting overhaul
I've addressed most of the errors from the "fast" linters locally in my editor.

Some linters are broken or buggy.
2024-08-29 16:43:52 -06:00
Matthew Holt
1daf6f4157
Initial open source commit
2024-08-11 08:02:27 -06:00