* Major processor refactor
- New processing pipeline, vastly simplified
- Several edge case bug fixes related to Google Photos (but they apply generally too)
- Major import speed improvements
- UI bug fixes
- Update dependencies
The previous 3-phase pipeline would first check for an existing row in the DB, then decide what to do (insert, update, skip, etc.), then download the data file, and finally update the row while applying lots of logic to determine whether the row was a duplicate. Very messy, honestly. The rationale was to avoid downloading files that might not need to be downloaded.
In practice, the data almost always needs to be downloaded anyway, and I kept hacking on the pipeline to handle edge cases related to concurrency and to making decisions about an item/row without having its data available. I managed to get all the tests to pass until the final boss appeared: an edge case bug in Google Photos -- a very important one that happened to be exposed by my wedding album, of all things. I was unable to fix the problem without a rewrite of the processor.
The problem was that Google Photos splits the data and metadata into separate files, and sometimes separate archives. The filename is in the metadata, and worse yet, there are duplicates if the media appears in different albums/folders, where the only way to know they're duplicates is by filename+content. Retrieval keys just weren't enough to solve this, and I narrowed it down to a design flaw in the processor: downloading the data files in phase 2, after making decisions about how to handle the item in phase 1, and then having to re-apply the decision logic in phase 3.
The new processing pipeline downloads the data up front in phase 1 (and there's a phase 0 that splits out some validation/sanitization logic, but it is of no major consequence). This can run concurrently for the whole batch. Then in phase 2, we obtain an exclusive write lock on the DB and, now that we have ALL the item information available, we can check for an existing row, decide what to do, and even rename/move the data file if needed -- all in one phase rather than split across two separate phases.
This simpler pipeline still has plenty of nuance, but in my testing, imports run much faster -- and the code is easier to reason about.
On my system (which is quite fast), I was able to import most kinds of data at a rate of over 2,000 items per second. And for media like Google Photos, it's a 10x increase over before, thanks to the concurrency in phase 1: up from about 3-5 items/second to around 30-50/second, depending on file size.
An import of about 200,000 text messages, including media attachments, finished in about 2 minutes.
My Google Photos library, which used to take almost a whole day, now takes only a couple hours to import. And that's over USB.
Also fixed several other minor bugs/edge cases.
This is a WIP. Some more cleanup and fixes are coming. For example, my solution to the Google Photos import bug is currently hard-coded (it happens to work for everything else so far, but it is not a good general solution). So I need to implement a general fix before this is ready to merge.
* Round out a few corners; fix some bugs
* Appease linter
* Try to fix linter again
* See if this works
* Try again
* See what actually fixed it
* See if allow list is necessary for replace in go.mod
* Ok fine just move it into place
* Refine retrieval keys a bit
* One more test
This is useful if a My Timeline subfolder is (sort of) implicitly created for the user, and the user doesn't realize that is where their timeline lives. They should be able to select the same folder to open the timeline as the one they selected to create it.
- Hopefully (!?) fixed map element sizing bug on page load
- Hopefully (!?) fixed bug where polyline layers wouldn't render sometimes
- Added time labels between points
- Made marker tooltips/popups more informative, though they still require lots of work
- Made lines slightly more legible
I suspect there are still some weird/sporadic bugs in the map page... but it's harder to find them now. Not sure if good or bad, haha.
* Add search and filter functionality to conversations page
- Add Search Conversations and Clear Filters buttons with icons
- Implement text search support for conversation messages
- Add event handlers for search button with loading feedback
- Add clear filters functionality to reset all filter inputs
- Support Enter key to trigger search from text input
* Remove unnecessary submit button
---------
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
* Do not try to open a browser in headless mode
When running timelinize serve without a display/desktop, you get a
harmless error in the server log output:
Error: no DISPLAY environment variable specified
This comes from xdg-open trying to open the server URL.
* Move log
---------
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
* Revise location processing and place entities
- New, more dynamic, recursive clustering algorithm
- Place entities are globally unique by name
- Higher spatial tolerance for coordinate attributes when the entity name is the same (i.e. don't insert a new attribute row for a coordinate if it's reasonably close to an existing row for that attribute -- but if the name is different, the points have to be much closer to avoid inserting a new row)
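The name-dependent tolerance rule above can be sketched like so (the function names and threshold values are illustrative assumptions, not the real ones):

```go
package main

import (
	"fmt"
	"math"
)

// needsNewAttributeRow sketches the rule: if an existing coordinate
// attribute belongs to an entity with the same name, a looser spatial
// tolerance applies before we bother inserting a new row; if the names
// differ, the points must be much closer to be treated as the same place.
// (Thresholds here are made up for illustration.)
func needsNewAttributeRow(sameName bool, lat1, lon1, lat2, lon2 float64) bool {
	const (
		looseTolKM = 1.0  // same name: within ~1 km counts as the same place
		tightTolKM = 0.05 // different name: must be within ~50 m
	)
	d := haversineKM(lat1, lon1, lat2, lon2)
	if sameName {
		return d > looseTolKM
	}
	return d > tightTolKM
}

// haversineKM computes the great-circle distance between two
// lat/lon points in kilometers.
func haversineKM(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadiusKM = 6371.0
	toRad := func(deg float64) float64 { return deg * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadiusKM * math.Asin(math.Sqrt(a))
}

func main() {
	// Two points ~500 m apart: reuse the row if names match,
	// insert a new one if they don't.
	fmt.Println(needsNewAttributeRow(true, 40.7128, -74.0060, 40.7173, -74.0060))  // false
	fmt.Println(needsNewAttributeRow(false, 40.7128, -74.0060, 40.7173, -74.0060)) // true
}
```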
There is still a bug where clustering is too aggressive on some data. Looking into it...
* Fix overly aggressive clustering
(...lots of commits that fixed the CI environment which changed things without warning...)
* lint: bump golangci-lint version
- Bumps the version of golangci-lint that's used in the GitHub Action to the most recent version (as installed with e.g. `brew install golangci-lint` — v2.1.6)
- Migrates the `.golangci.toml` file, and manually moves the comments over
- `errchkjson` appears to work now, so added that back into the linter config (the `forbidigo` and `goheader` linters I've left commented out)
* lint: remove checkers we don't like
Removes two static checkers that cause code changes we don't like.
* lint: remove old lint declaration
Apparently `gosimple` isn't available anymore, so I've removed its `nolint` declaration here.
* lint: swap location of `nolint:goconst`
This _seems_ to be an unstable declaration because of the parallel & nondeterministic nature of the linter. If this keeps causing trouble, we can either remove the goconst linter or change _both_ of these lines to hold `//nolint:goconst,nolintlint`.
* feat: add dockerfile for dev environments
* feat: setup dev environment using dev containers
* fix: delete docker-compose-dev.yaml as well
* feat: add instructions for running project inside dev container
* See how wrong the AI is
* Man... the AI is just straight-up lying to me now
* Screw it
* grumble
* sadfsfd
* asdfsdf
* See what's actually necessary
* Test test
* Still pruning
* This better not work
* Hopefully final cleanup