I honestly don’t know. The only advice I’d have for the layman would be “just don’t do this”, but I understand that’s little more than an invitation to be ignored.
Running strange software grabbed from unknown sources will never not be a risky proposition.
Uploading the .exe you just grabbed to virustotal and getting the all clear can indicate two very different things: It’s either actually safe, or it hasn’t yet been detected as malware.
You should expect that malware writers have already uploaded some variant of their work to virustotal before seeding it, to ensure maximum impact.
Getting happy results from virustotal could just mean the malware author tweaked their work until they saw those same results.
Notice I said “yet” above. Malware tends to eventually get flagged as such, even when it has a head start of not being recognized correctly.
You can use that to somewhat lower the odds of getting infected, by waiting. Don’t grab the latest crack that just dropped for the hottest game or whatever.
Wait a few weeks. Let other people get infected first and give antivirus databases time to recognize the new malware. Then maybe give it a shot.
And of course, the notion that keygens will often be flagged as “bad” software by unhelpful antiviruses just further muddies the waters, since it teaches you to ignore or altogether disable your antivirus in one of the riskiest situations you’ll put yourself in.
Let’s be clear: There’s nothing safe about any of this, and if you do this on a computer that has access to anything you wouldn’t want to lose, you are living dangerously indeed.
Several times now, I’ve sent people I knew links to articles that looked perfectly fine to me, but turned out to be unusable ad-ridden garbage to them.
Since then, I try to remember to disable uBlock Origin to check what they’ll actually see before I share any links.
There’s a near infinity of those out there, many of which just grab other scanlation groups’ output and slap their ads on top of it.
Mangadex is generally my happy place, but you’ll have to wander out and about for various specific mangas.
Several of the groups that post on Mangadex also have their own website and you may find more stuff there.
For example right now I’ve landed on asurascans.com, which has a bunch of Korean and Chinese long strips, with generally good quality translations.
The usual sticky point with all those manga sites is the ability to track where you are in a series and continue where you left off when new chapters are posted.
Even Mangadex struggles with that: their “Updates” page is the closest thing they have to it, and it’s still not very good.
If you’re going to stick to one site for any length of time, and you happen to be comfortable with userscripts, I’d suggest you head over to greasyfork.org, search for the manga domain you’re using, and look for scripts that might improve your binging experience there.
That sounds like an improbable attempt to leverage the notion that minors can’t enter into a legally binding contract into a loophole to get anything for free by simply having your kid order it.
I have a small userscript/style tweak to remove all input fields from reddit, so I’m still allowing myself to browse reddit in read-only mode on desktop, with no mobile access.
It’s a gentle way to wean myself off. I’m still waiting for my GDPR data dump anyway, so I need to check reddit fairly regularly to be able to grab it when/if it arrives.
Let that trashcan in.
You can list every man page installed on your system with `man -k .`, or just `apropos .`.
But that’s a lot of random junk. If you only want “executable programs or shell commands”, only grab man pages in section 1 with `apropos -s 1 .`.
You can get the path of a man page with `whereis -m pwd` (replace `pwd` with your page name).
You can convert a man page to HTML with `man2html` (may require `apt install man2html` or whatever equivalent applies to your distro).
That tool adds a couple of useless lines at the beginning of each file, so we’ll want to pipe its output into a `tail -n +3` to get rid of them.
Combine all of these together in a questionable incantation, and you might end up with something like this:
List every command in section 1 and extract the id only; for each id, get a file path; for each id and file path (ignoring the rest), convert to HTML and save it as a file named `$id.html`.
It might take a little while to run, but then you could run `firefox .` or whatever and browse the resulting mess. Or keep tweaking all of this until it’s just right for you.
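As a starting point for that tweaking, here’s one possible shape for the incantation. The `manhtml` output directory, the `zcat -f` step, and the command guards are my additions, not part of the original recipe: most distros ship man pages gzipped, which `man2html` won’t read directly, and the guards just let the sketch degrade gracefully where the tools aren’t installed.

```shell
# Sketch only: dump an HTML copy of every section-1 man page into ./manhtml.
mkdir -p manhtml

if command -v apropos >/dev/null 2>&1 && command -v man2html >/dev/null 2>&1; then
  # List section-1 pages, keep the id (first word) only, dedupe.
  apropos -s 1 . | awk '{print $1}' | sort -u |
  while read -r id; do
    # whereis -m prints "name: /path/to/page.1.gz ..."; keep the first path.
    set -- $(whereis -m "$id" 2>/dev/null)  # unquoted on purpose: split on spaces
    path=$2
    [ -n "$path" ] || continue
    # zcat -f decompresses gzipped pages and passes plain ones through.
    zcat -f "$path" | man2html | tail -n +3 > "manhtml/$id.html"
  done
fi
```

Pages whose ids collide across subsections will overwrite each other, which is probably fine for casual browsing.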