ReplaceFile exists to get everyone else’s semantics though?
Related, note that division is much slower than multiplication.
Instead of:
n / d
see if you can refactor it to:
n * (1.0/d)
where that inverse can then be hoisted out of loops.
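For instance, a minimal C sketch of that hoisting (note the refactor is not bit-identical to dividing each element, which is why compilers won’t do it for you without -ffast-math/-freciprocal-math):

#include <stddef.h>

/* One division up front, then only (much cheaper) multiplies in the loop. */
void scale_all(double *out, const double *in, size_t n, double d)
{
    const double inv = 1.0 / d;   /* hoisted reciprocal */
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * inv;     /* results may differ in the last bit */
}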
This is the one place where SQL is a badly designed language, and you should use a frontend that forces you to write your queries in the order (table, filter, columns) for consistency.
UPDATE table_name WHERE y = $3 SET w = $1, x = $2, z = $4 RETURNING *
FROM table_name SELECT w, x, y, z
Obviously the actual programs are trivial. The question is, how are the tools supposed to be used?
So you say to use deno? Out of all the tutorials I found telling me what tools to use, that wasn’t one of them (I really thought this “typescript” package would be the thing I was supposed to use; I just checked again on a hot cache and it was 1.7 seconds real time, 4.5 seconds cpu time, only 2.9 seconds if I pin everything to a single core). And I swear, just this week I saw people saying “seriously, don’t use deno”. It also doesn’t seem to address the browser use case at all though.
In other languages I know, I know how to write 4 files (the fib library and 3 frontends), and compile and/or execute them separately. I know how to shove all of them into a single blob with multiple entry points selected dynamically. I know how to shove just one frontend with the library into a single executable. I know how to separately compile the library and each frontend, producing 4 separate artifacts, with the library being dynamically replaceable. I even know how to leave them as loose files and execute them directly (barring things like C). I can choose between these things all in a single codebase, since there are no hard-coded project filenames.
I learned these things because I knew I wanted the ability from previous languages I’d learned, and very quickly found how the new language’s tools supported that.
I don’t have that for TS (JS itself seems to be fine, since I have yet to actually need all the polyfill spam). And every time I try to find an answer, I get something that contradicts everything I read before.
That is why I say that TS is a hopelessly immature ecosystem.
I’m not concerned about Microsoft’s involvement. TypeScript shows an immature tooling ecosystem even on its own merits.
I posted some of my concerns earlier, along with a basic problem challenge (that I can easily do in many other languages) that nobody managed to solve: https://programming.dev/comment/2734178
The problem with mailing lists is that no mailing list provider ever supports “subscribe to this message tree”.
As a result, either you get constant spam, or you don’t get half the replies.
True, speed does matter somewhat. But even if xterm isn’t the ultimate in speed, it’s pretty good. Starts up instantly (the benefit of no extraneous libraries); the worst question is whether it’s occasionally limited to the framerate for certain output patterns, and if there’s a clog you can always minimize it for a moment.
Speed is far from the only thing that matters in terminal emulators though. Correctness is critical.
The only terminals in which I have any confidence of correctness are xterm and pangoterm. And I suppose technically the BEL-for-ST extension is incorrect even there, but we have to live with that and a workaround is available.
A lot of terminal emulators end up hard-coding a handful of common sequences, and fail to correctly ignore sequences they don’t implement. And worse, many go on to implement sequences that cannot be correctly handled.
One simple example that usually fails: \e!!F. Nastier, however, are the ones that ignore intermediaries and execute some unrelated command instead.
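A minimal sketch of the correct rule, covering plain ESC sequences only (CSI/OSC/DCS have their own grammars, and ESC [ would hand off to the CSI parser): dispatch must key on the intermediates plus the final byte, and unknown combinations must be swallowed whole.

#include <stdio.h>

/* Per ECMA-48: after ESC, bytes 0x20-0x2F are intermediates and
 * 0x30-0x7E is the final byte. \e!!F is intermediates "!!" + final 'F'. */
static void dispatch_esc(const unsigned char *inter, int n, unsigned char final)
{
    if (n == 0 && final == 'c') {
        /* ESC c = RIS (hard reset) ... */
    } else if (n == 1 && inter[0] == '#' && final == '8') {
        /* ESC # 8 = DECALN (screen alignment test) ... */
    } else {
        /* Unknown (intermediates, final) pair: ignore the WHOLE sequence.
         * Dispatching on the final byte alone is the bug described above. */
    }
}

void consume_esc(int (*next)(void))  /* call after reading an ESC byte */
{
    unsigned char inter[4];
    int n = 0, c;
    while ((c = next()) >= 0x20 && c <= 0x2F)
        if (n < 4)                       /* 4+ intermediates: unknown anyway */
            inter[n++] = (unsigned char)c;
    if (c >= 0x30 && c <= 0x7E)
        dispatch_esc(inter, n, (unsigned char)c);
    /* else: truncated/malformed; drop what we have */
}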
I can’t be bothered to pick apart specific terminals anymore. Most don’t even know what an IR is.
I guess I forgot to mention the other implicit difference in concerns:
When you are a game, you can reasonably assume: I have the user’s full focus and can take all the computing resources of their device, barring a few background apps.
When you are an application, the user will almost always have several other applications running to a meaningful degree, and those eat into available resources (often in a difficult-to-measure way). Unfortunately this rarely gets tested.
I’m not saying you can’t write an app using a game toolkit or vice versa, but you have to be aware of the differences and figure out how to configure it correctly for your use case.
(though actually - some purely turn-based games that do nothing until the user enters input do just fine on app toolkits. The existence of such games means that game toolkits almost always provide some way of supporting the app paradigm. By contrast, app toolkits often lack ready support for continuous game paradigms … unless you use APIs designed for video playback, often involving creating a separate child “window”. Actual video playback is really hard; even the makers of dedicated video-playing programs mess it up.)
The problem with XCB is that it’s designed to be efficient, not easy. If you’re avoiding toolkits for some reason, “so what if I block the world” may be a reasonable tradeoff.
There tends to be one major difference between games and non-game applications, so toolkits designed for one are often quite unsuitable for the other.
A game generally performs logic to paint the whole window, every frame, with at most some framerate-limiting in “paused” states. This burns power but is steady and often tries hard to reduce latency.
An application generally tries to paint as little of the window as possible, as rarely as possible. Reducing video bandwidth means using a lot less power, but can involve variable loads so sometimes latency gets pushed down to “it would be nice”.
Notably, the implications of the 4-way choice between {tearing, vsync, double-buffer, triple-buffer} look very different between those two - and so does the question of “how do we use the GPU?”
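To make the contrast concrete, here is a sketch of the two loop shapes (SDL2 is used purely for familiarity; error handling elided):

#include <SDL.h>

/* Game-style: repaint everything, every frame; pacing comes from vsync. */
void game_loop(SDL_Renderer *r)
{
    for (;;) {
        SDL_Event e;
        while (SDL_PollEvent(&e))   /* drain input without blocking */
            if (e.type == SDL_QUIT)
                return;
        /* update simulation, then redraw the whole window */
        SDL_RenderClear(r);
        /* ... draw everything ... */
        SDL_RenderPresent(r);       /* paces to vsync if the renderer was
                                       created with SDL_RENDERER_PRESENTVSYNC */
    }
}

/* App-style: sleep until an event arrives; repaint as little as possible. */
void app_loop(SDL_Renderer *r)
{
    SDL_Event e;
    while (SDL_WaitEvent(&e)) {     /* blocks; ~zero CPU while idle */
        if (e.type == SDL_QUIT)
            return;
        if (e.type == SDL_WINDOWEVENT) {
            /* redraw only the damaged part, then present */
            SDL_RenderPresent(r);
        }
    }
}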
1, Don’t target X11 specifically these days. Yes, a lot of people still use it, or at least support it in a backward-compatible manner, but Wayland adoption is only increasing.
2, Don’t fear the use of libraries. SDL and GTK, being C-based, should both be feasible from assembly; at most you might want to build a C program that dumps constants (if -dM doesn’t suffice) and struct offsets (if you don’t want to hard-code them).
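A sketch of such a dumper, emitting assembler-friendly definitions (the SDL2 names here are purely illustrative):

#include <stddef.h>
#include <stdio.h>
#include <SDL.h>

int main(void)
{
    /* enum values, which -E -dM can't show */
    printf(".equ SDL_QUIT, %d\n", (int)SDL_QUIT);
    printf(".equ SDL_INIT_VIDEO, %u\n", (unsigned)SDL_INIT_VIDEO);
    /* struct offsets and sizes, so the assembly needn't hard-code them */
    printf(".equ SDL_Event_type, %zu\n", offsetof(SDL_Event, type));
    printf(".equ SDL_Event_size, %zu\n", sizeof(SDL_Event));
    return 0;
}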
Some languages don’t even support linking at all. Interpreted languages often dispatch everything by name without any relocations, which is obviously horrible. And some compiled languages only support translating the whole program (or at least, whole binary - looking at you, Rust!) at once. Do note that “static linking” has shades of meaning: it applies to “link multiple objects into a binary”, but often that is excluded from the discussion in favor of just “use a .a instead of a .so”.
Dynamic linking supports a much faster development cycle than static linking (which is faster than whole-binary-at-once), at the cost of slightly slower runtime (but the location of that slowness can be controlled, if you actually care, and can easily be kept out of hot paths). It is of particularly high value for security updates, but we all know most developers don’t care about security, so I’m talking about annoyance instead. Some realistic numbers here: dynamic linking might be “rebuild in 0.3 seconds” vs static linking “rebuild in 3 seconds” vs no linking “rebuild in 30 seconds”.
Dynamic linking is generally more reliable against long-term system changes. For example, it is impossible to run old statically-linked versions of bash 3.2 anymore on a modern distro (something about an incompatible locale format?), whereas the dynamically linked versions work just fine (assuming the libraries are installed, which is a reasonable assumption). Keep in mind that “just run everything in a container” isn’t a solution because somebody has to maintain the distro inside the container.
Unfortunately, a lot of programmers lack basic competence and therefore have trouble setting up dynamic linking. If you really need frobbing, there’s nothing wrong with RPATH if you’re not setuid or similar (and even if you are, absolute root-owned paths are safe - a reasonable restriction since setuid will require more than just extracting a tarball anyway).
Even if you do use static linking, you should NEVER statically link to libc, and probably not to libstdc++ either. There are just too many things that can go wrong when you give up on the notion of “single source of truth”. If you actually read the man pages for the tools you’re using, this is very easy to do, but a lack of such basic abilities is common among proponents of static linking.
Again, keep in mind that “just run everything in a container” isn’t a solution because somebody has to maintain the distro inside the container.
The big question these days should not be “static or dynamic linking” but “dynamic linking with or without semantic interposition?” Apple’s broken “two-level namespaces” is closely related but also prevents symbol migration, and is really aimed at people who forgot to use -fvisibility=hidden.
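For reference, a sketch of that -fvisibility=hidden discipline (hidden symbols are also exempt from semantic interposition, which is most of the performance argument):

/* Build (illustrative): gcc -fPIC -fvisibility=hidden -shared -o libfoo.so foo.c */
#define EXPORT __attribute__((visibility("default")))

static int helper(int x) { return x * 2; }           /* file-local anyway */

int internal_detail(int x) { return helper(x) + 1; } /* hidden by the flag */

EXPORT int foo_compute(int x)                        /* the deliberate ABI */
{
    return internal_detail(x);
}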
It’s solving (and facing) some very interesting problems at a technical level …
but I can’t get over the dumb decision for how IO is done. It’s $CURRENTYEAR; we have global constructors even if your platform really needs them (hint: it probably doesn’t).
Stop reinventing the wheel.
Major translation systems like gettext (especially the GNU variant) have decades of tooling built up for “merging” and all sorts of other operations.
Even if you don’t want to use their binary format at runtime, their tooling is still worth it.
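For the unfamiliar, the runtime side is tiny (GNU gettext here; “myapp” and the locale path are placeholders), and the tooling - xgettext to extract, msgmerge to do that merging, msgfmt for the binary catalogs - keys off the _() calls:

#include <libintl.h>
#include <locale.h>
#include <stdio.h>

#define _(s) gettext(s)   /* conventional marker; xgettext scans for it */

int main(void)
{
    setlocale(LC_ALL, "");                        /* honor the user's locale */
    bindtextdomain("myapp", "/usr/share/locale"); /* placeholder domain/path */
    textdomain("myapp");
    printf(_("Hello, world!\n"));                 /* extracted into the .pot */
    return 0;
}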
Write-up is highly Windows-centric (though not irrelevant elsewhere).
One thing that is regretfully ignored in discussions of async, tasks, green threads, etc. is that there is no support/consideration for native (reliable/efficient) thread-local variables. If you’re lucky you’ll get a warning about “don’t use them”.
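The core hazard as a C sketch; await_point() is a hypothetical suspension where a work-stealing runtime could resume the task on a different OS thread:

#include <stdio.h>
#include <threads.h>

thread_local int tls_counter;    /* one copy per OS thread */

/* Stub standing in for an await/yield; in a real runtime the task may
 * resume on a different OS thread after this returns. */
static void await_point(void) {}

void task(void)
{
    int *p = &tls_counter;  /* address inside THIS OS thread's TLS block */
    await_point();          /* task may migrate across OS threads here */
    ++*p;                   /* ...so this can hit the WRONG thread's copy */
}

int main(void)
{
    task();
    printf("%d\n", tls_counter);
    return 0;
}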
For an extension like this - unlike most prior extensions - you’re best off with essentially an entirely separately compiled copy of the program/library. So IFUNC is a poor fit, even with peer optimization.
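For contrast, the IFUNC shape (GNU toolchain on x86 assumed; the bodies are placeholders). The resolver picks one symbol per function at load time, which is exactly why it can’t stand in for a separately compiled copy of the whole library:

#include <stdio.h>

static int add_generic(int a, int b) { return a + b; }
static int add_fancy(int a, int b)   { return a + b; /* pretend: SIMD */ }

/* The resolver runs at dynamic-link time, before main(). */
static int (*resolve_add(void))(int, int)
{
    __builtin_cpu_init();   /* resolvers run before normal constructors */
    return __builtin_cpu_supports("avx2") ? add_fancy : add_generic;
}

/* One IFUNC covers exactly one symbol. */
int add(int, int) __attribute__((ifunc("resolve_add")));

int main(void) { printf("%d\n", add(2, 3)); return 0; }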
Two of the most expensive things a shell does are calling fork and calling execve for an external program. pwd is a builtin (at least for bash), but the former still applies. $PWD exists even if you don’t want that shortening; just like your backticks, be sure to quote it once so it doesn’t get expanded when assigning to PS1.
In general, for most things you might want to do, you can arrange for variables to be set ahead of time and simply expanded at use time, rather than recalculating them every time. For example, you can hook cd/pushd/popd to get an actually-fast git prompt. Rather than var=$(some_function) you should have some_function output directly to a variable (possibly hard-coded - REPLY is semi-common; you can move the value later); printf -v is often useful. Indirection should almost always be avoided (unless you do the indirect-unset bash-specific hack or don’t have any locals) due to shadowing problems (you have to hard-code variable name assumptions anyway, so you might as well be explicit).
That doesn’t seem sensible. Moving the cursor will confuse bash, and you can get the same effect by just omitting the last \n.
Note that bash 5.0, but not earlier or later versions, is buggy with multiline prompts even if they’re correct.
Your colors should use 39 (or 49) for reset.
Avoid doing external commands in subshells when there’s a perfectly good prompt-expansion string that works.
You seem to be generating several unnecessary blank lines, though I haven’t analyzed them in depth; remember that doing them conditionally is an option, like I do:
#PS1 built up incrementally before this, including things like setting TTY title for appropriate terminals
PS0='vvv \D{%F %T%z}\n'
PS1='^^^ \D{%F %T%z}\n'"$PS1"
prompt-command-exit-nonzero()
{
    # TODO map signal names and <sysexits.h> and 126/127 errors?
    # (128 also appears in some weird job-control cases; there are also
    # numerous cases where $? is not in $PIPESTATUS)
    # This has to come first since $? will be invalidated.
    # It's also one of the few cases where `*` is non-buggy for an array.
    local e=$? pipestatus="${PIPESTATUS[*]}"
    # Fixup newline. Note that interactive shells specifically use stderr
    # for the prompt, not stdin, stdout, or /dev/tty
    printf '\e[93;41m%%\e[39;49m%'"$((COLUMNS-1))"'s\r' >&2
    # if e or any pipestatus is nonzero
    if [[ -n "${e/0}${pipestatus//[ 0]}" ]]
    then
        if [[ "$pipestatus" != "$e" ]]
        then
            local pipestatus_no_SIGPIPE="${pipestatus//141 /}"
            local color=41
            if [[ -z "${pipestatus_no_SIGPIPE//[ 0]}" ]]
            then
                color=43
            fi
            printf '\e[%smexit_status: %s (%s)\e[49m\n' "$color" "$e" "${pipestatus// / | }" >&2
        else
            printf '\e[41mexit_status: %s\e[49m\n' "$e" >&2
        fi
    fi
}
PROMPT_COMMAND='prompt-command-exit-nonzero'
From my experience, Cinnamon is definitely highly immature compared to KDE. Very poor support for virtual desktops is the thing that jumped out at me most. There were also some problems regarding shortcuts and/or keyboard layout, I think, and probably others, but I only played with it for a couple of weeks while limited to a LiveCD.