Docker version with pgsql fails on image update #4080
Comments
I'm not very educated in this area and may be misinterpreting what I'm reading, but this looks to me like a possible bug in the migration code. I can recover my container by starting it up again in the default (localdb) configuration, then migrating to pgsql again. But if I've already provided pgsql settings that work (e.g., I have migrated before), it fails in this "pre-migrate" step after an image update. Someone on Reddit suggested the weird IP could be an AT&T DNS server, which, if so, alleviates that part of the concern.
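For concreteness, here is a rough sketch of what that recovery toggle looks like in compose terms; the image tag, DSN, and names are placeholders, not the real settings:

```yaml
# Rough sketch of the recovery toggle described above; values are placeholders.
services:
  memos:
    image: neosmemo/memos:latest
    # Step 1: start once with MEMOS_DRIVER/MEMOS_DSN removed, so memos falls
    #         back to its default local SQLite database and comes up cleanly.
    # Step 2: restore the postgres settings below and restart the container.
    environment:
      - MEMOS_DRIVER=postgres
      - MEMOS_DSN=postgresql://memos:memos@postgres:5432/memos?sslmode=disable
```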
This is a warning and does not affect the startup process.
This one is the main reason it failed to run: it can't connect to your database.
I fully agree that's the problem, or at least part of the problem. But why?
And, to me, this is the kicker: if I follow these steps, memos also works with pgsql just fine, UNTIL the next image update, when I must repeat these steps again.
It's as if the database schema migration for each version needs to be run by switching back to localdb first, which seems very odd.
The MySQL database also fails during upgrade.
In case this helps identify the problem, I have a little more information: after a recent DSM update I had to restart my NAS (rare), and the memos container refused to start, with the same "migrate" reason as above, even though I hadn't updated the memos container image this time (it was already on the current version). Following the steps I mentioned in my comment above resolved the problem again. So the issue still seems (from a user's point of view) to be related to the startup migration code, but doesn't always seem to be due to an image update.
MySQL also failed. |
@davidtavarez, I believe the MySQL failure on upgrade might relate to a different issue. Can you have a look at #4127 and see if your scenario aligns with it?
Describe the bug
I'm running neosmemo/memos@latest on my Synology NAS with DSM 7.2.2 as part of a container project that includes postgres. It typically works great, but every time I update the image it dies and needs to be completely set up again.
After the most recent update it won't start yet again, and this is what's in the log:
2024/10/31 05:39:43 WARN failed to find migration history in pre-migrate error="dial tcp 143.244.220.150:5432: connect: connection timed out"
That IP address isn't mine; it appears to be a DigitalOcean IP. That may have a simple explanation, but it certainly warrants one, IMO.
Steps to reproduce
Here's my compose file in case it's useful; I removed the other containers that share the postgres container, which work fine when they are updated.
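For reference, a rough sketch of the kind of setup described here; all names, credentials, tags, and paths are placeholders rather than the actual file:

```yaml
# Rough sketch only; values are placeholders, not the actual compose file.
services:
  memos:
    image: neosmemo/memos:latest
    restart: unless-stopped
    ports:
      - "5230:5230"
    environment:
      - MEMOS_DRIVER=postgres
      # The hostname in the DSN should resolve to the postgres service below,
      # not to an external address.
      - MEMOS_DSN=postgresql://memos:memos@postgres:5432/memos?sslmode=disable
    volumes:
      - ./memos:/var/opt/memos
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      - POSTGRES_USER=memos
      - POSTGRES_PASSWORD=memos
      - POSTGRES_DB=memos
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U memos -d memos"]
      interval: 5s
      timeout: 5s
      retries: 10
```

The depends_on/service_healthy wiring shown here is just one way to rule out the database not yet being reachable when memos runs its startup migration; it isn't necessarily the fix for the problem described in this issue.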
The version of Memos you're using.
v0.23.0
Screenshots or additional context
memos container log (CSV export):
date,stream,content
2024/10/31 05:45:17,stderr,2024/10/31 05:45:17 ERROR failed to migrate error="dial tcp 143.244.220.150:5432: connect: connection timed out\nfailed to start transaction\ngithub.com/usememos/memos/store.(*Store).preMigrate\n\t/backend-build/store/migrator.go:140\ngithub.com/usememos/memos/store.(*Store).Migrate\n\t/backend-build/store/migrator.go:38\nmain.init.func1\n\t/backend-build/bin/memos/main.go:61\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/backend-build/bin/memos/main.go:171\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700\nfailed to pre-migrate\ngithub.com/usememos/memos/store.(*Store).Migrate\n\t/backend-build/store/migrator.go:39\nmain.init.func1\n\t/backend-build/bin/memos/main.go:61\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/backend-build/bin/memos/main.go:171\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"
2024/10/31 05:43:10,stderr,2024/10/31 05:43:10 WARN failed to find migration history in pre-migrate error="dial tcp 143.244.220.150:5432: connect: connection timed out"
2024/10/31 05:39:43,stderr,2024/10/31 05:39:43 WARN failed to find migration history in pre-migrate error="dial tcp 143.244.220.150:5432: connect: connection timed out"