The last few days have been incredible, with contributions pouring in. We had 13 first-time contributors in just 3 days, and the release before that had 14 contributors.
This is improved in 0.0.33. Now you can just tap anywhere on the comment that isn’t also a button. There’s a setting to choose whether that collapses the comment itself or just its replies.
As a workaround for now, you could consider using a browser like Firefox Focus. It’s basically an always-incognito browser.
I’m currently working on making fediverse links opened in Jerboa open within Jerboa itself. After that, I think we can look at how to support that “add more links” setting in the UI.
We just released a big new update to Jerboa that adds a lot of much-needed features and polish. We had 14 new contributors too!
Thanks to everyone who has opened issues or submitted PRs! 14 new contributors this round!
There’s an open PR that’ll fix the font size issue. I’m using it now and it’s great. I’m also working on adding my personal must-have UI options from Boost.
I imagine it’ll be possible in the near future to improve the accuracy of technical AI content fairly easily. It would go something like this: have an LLM generate a candidate response, then have a second LLM validate that response. The validator would have access to real references it can use to check correctness; for example, a Python response could be run through a Python interpreter to confirm it does, to some extent, what it’s purported to do. The validator then either decides the output is most likely correct, or sends feedback asking the first LLM to revise until the response passes validation. This wouldn’t catch 100% of errors, but a process like this could significantly reduce the frequency of hallucinations.
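A rough sketch of what that loop might look like, assuming two model endpoints behind the hypothetical stand-in functions `generate_candidate` and `critique` (neither is a real API, just placeholders for whatever LLM calls you’d wire in):

```python
import subprocess
import tempfile

def generate_candidate(prompt: str, feedback: str | None = None) -> str:
    """Ask the 'generator' model for a candidate answer (here, a Python snippet).
    If feedback from a failed validation is given, include it so the model can revise."""
    raise NotImplementedError("call your generator LLM here")

def critique(candidate: str, run_output: str) -> str | None:
    """Ask the 'validator' model to judge the candidate against real evidence
    (the interpreter output). Return None if it looks correct, else revision notes."""
    raise NotImplementedError("call your validator LLM here")

def run_in_interpreter(code: str, timeout: int = 10) -> str:
    """Execute the candidate in a real Python interpreter and capture its output,
    so validation is grounded in something beyond the model's own opinion."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

def generate_validated(prompt: str, max_rounds: int = 3) -> str:
    """Generate -> execute -> validate -> revise, up to max_rounds attempts."""
    feedback = None
    candidate = ""
    for _ in range(max_rounds):
        candidate = generate_candidate(prompt, feedback)
        run_output = run_in_interpreter(candidate)
        feedback = critique(candidate, run_output)
        if feedback is None:   # validator is satisfied
            return candidate
    return candidate           # best effort after max_rounds; may still be wrong
```

This is just one way to arrange it; the key idea is that the validator gets to see real interpreter output rather than judging the text in isolation.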
This is due to poor error handling in the API client code, triggered by the server returning some sort of error. There’s an open issue but it hasn’t been taken up yet.