Apparently there’s a post in my stream that’s crashing the pod. When I try to view my stream in either mobile or desktop view, I get a 500 error. I can view aspects, profile, and the admin pages, and create a post in mobile view, so I know other components are working; it’s a bug in the stream rendering code, probably triggered by a bad post (likely a poll).
If somebody can help me track down which post it is, I can send the offending markdown and we can see if we can create a test case.
Nah, your pod is trying to render a mention of a person that, for some reason, is not known to your pod (which should never happen™), and because the code doesn’t expect the person to be missing, it crashes.
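The failure mode is easy to reproduce in isolation: if the mention renderer assumes the Person lookup always succeeds, a nil result blows up with a NoMethodError. A minimal, self-contained sketch of that pattern — this is a simplified stand-in with a Hash as the "database", not diaspora*’s actual rendering code:

```ruby
# Hypothetical, simplified mention renderer. PEOPLE stands in for the
# Person table; this is NOT diaspora*'s real implementation.
PEOPLE = { "alice@pod.example" => { name: "Alice" } }

def render_mention_unsafe(handle)
  person = PEOPLE[handle]
  "@#{person[:name]}" # raises NoMethodError when person is nil
end

def render_mention_safe(handle)
  person = PEOPLE[handle]
  return handle if person.nil? # fall back to the raw handle instead of crashing
  "@#{person[:name]}"
end

puts render_mention_safe("alice@pod.example")   # "@Alice"
puts render_mention_safe("senya@socializer.cc") # unknown person: prints the handle
begin
  render_mention_unsafe("senya@socializer.cc")
rescue NoMethodError => e
  puts "unsafe renderer crashed: #{e.class}"
end
```

The unguarded version is the crash described above; the guarded one degrades gracefully when the Person record is absent.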
As your multi stream fails, the easiest way to find the offending post is to iterate over your stream, try to render the message for all posts, and see where it crashes. In a RAILS_ENV=production bin/rails c, you could use
Stream::Multi.new(User.find_by(username: "YOUR_USERNAME")).posts.each do |post|
  begin
    post.message.markdownified
  rescue
    puts "message.markdownified failed for #{post.guid}"
  end
end; true
which prints the GUIDs of all posts that fail to render their message. You can then inspect a post using Post.find_by(guid: "abc").
As far as I am concerned, this isn’t even a problem, because we do fetch the Person if it’s not there, and it would be interesting why that failed. Please share the post’s GUID, as well as the author’s diaspora handle. If it’s a private post, feel free to use Discourse’s private message feature to share these details in private.
That’s… certainly interesting. That’s a reshare of an 11-month-old post… from me. There are a couple of mentions in there, including one that’s now technically broken (mrzyx@social.mrzyx.de, that account has been migrated to a new diaspora handle using the experimental migration support), but no other pod fails to display the post (Geraspora, Nerdpol, JoinDiaspora, Sechat), so this is something on your side.
The result of the first command is ["senya@socializer.cc"]; the result of the second is #<AccountMigration id: 1, old_person_id: 3099, new_person_id: 94142, completed_at: "2019-05-20 02:15:31">
Ah, so that’s interesting. My first thought was “oh no, the migration broke things”, even though that makes no real sense: a migration should not cause failures, it should only result in the mention being rendered as a mailto: link (which makes little sense semantically, but at least it doesn’t crash). But that’s not even the case here, as your pod is aware of said migration.
senya@socializer.cc is missing, and that’s interesting. Since it is still in this post’s .mentioned_people array, the Person was once known to your pod - but no longer is. If the Person had never been there, the entry would not have been in the mentioned_people list in the first place.
It’s worth noting that socializer.cc has been offline since 2019-05-08 15:37:46, so re-fetching this is not an option.
Knowing you and your exciting adventures through the diaspora* database in the quest to optimize things, did you ever drop permadown pods and their Persons?
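If Persons were indeed deleted while mention rows survived, the damage should be visible as orphaned mentions. A hedged sketch of that check — the model and column names (Mention, person_id) are assumptions, so verify them against your schema; the runnable part below just demonstrates the underlying set-difference logic on plain data:

```ruby
# In a Rails console, the real check would look something like
# (model/column names are assumptions):
#   Mention.where.not(person_id: Person.select(:id)).pluck(:person_id)
#
# The underlying logic is a set difference; with stand-in data:
mention_person_ids  = [3099, 94142] # person ids referenced by mention rows
existing_person_ids = [94142]       # Person rows still present in the database

orphaned = mention_person_ids - existing_person_ids
puts "orphaned mention person_ids: #{orphaned.inspect}" # [3099]
```

Any id that comes back orphaned is a mention pointing at a Person that no longer exists - exactly the condition that crashes the stream renderer.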
Yeah, it may be the result of some collateral damage from that. Fsck. My bad.
Thanks for the help; I need to find some better ways to reduce the load on my pod. The resources it consumes are becoming hard to sustain. That’s part of the reason I’ve been working on my own designs for an AP node; for the few users I have, it shouldn’t take this many resources just to end up with a 3-second stream load.
Still, I’ve no one to blame but myself. I may be able to recover the records from backups if I try hard enough. Thanks for your help!
Well, at least we now have a practical example of “how things can go horribly wrong in ways that even someone with experience didn’t see coming”. Glad we could work that out, at least.
I am currently (a couple of hours a week, as I have other things to do - and I have to work…) investigating a couple of different approaches to reduce the load without removing any data. It’ll take a bit until that work yields any releasable results, but eventually, we’ll be there. There absolutely is a way to make diaspora* more efficient without sacrificing old records.
I wish I were faster or could spend more time on it, but oh well.