• abhibeckert@lemmy.world

    I love the comparison of the length of the same UTF-8 string in four programming languages (only the last one is correct, by the way):

    Python 3:

    len("🤦🏼‍♂️")

    5

    JavaScript / Java / C#:

    "🤦🏼‍♂️".length

    7

    Rust:

    println!("{}", "🤦🏼‍♂️".len());

    17

    Swift:

    print("🤦🏼‍♂️".count)

    1

    • Walnut356@programming.dev

      That depends on your definition of correct lmao. Rust’s len() explicitly counts the raw UTF-8 bytes contained in the string, because that’s what the string actually stores. There are many times where that value is more useful than the grapheme count.
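
      For instance, here’s a minimal sketch of a case where the byte length is exactly the value you want: reserving buffer space before copying the string. The emoji is written with explicit escapes so the individual code points are visible.

      fn main() {
          // 🤦🏼‍♂️ = FACE PALM + skin tone modifier + ZWJ + MALE SIGN + variation selector
          let s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}";
          assert_eq!(s.len(), 17); // bytes of the UTF-8 encoding
          let mut buf: Vec<u8> = Vec::with_capacity(s.len()); // exact allocation
          buf.extend_from_slice(s.as_bytes());
          assert_eq!(buf.len(), 17);
      }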

      • Black616Angel@feddit.de

        And Rust also has "🤦".chars().count(), which returns 1.

        I would rather argue that Rust should not have a simple len function for strings, but since str is only a byte slice, it works that way.

        Also, the len function’s documentation clearly states:

        This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string.
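
        A small illustration of that doc note, using the plain 🤦 from above (a single code point with no modifiers):

        fn main() {
            let s = "\u{1F926}";              // 🤦, one Unicode scalar value
            assert_eq!(s.len(), 4);           // length in bytes
            assert_eq!(s.chars().count(), 1); // length in chars (scalar values)
        }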

        • lemmyvore@feddit.nl

          None of these languages should have generic len() or size() for strings, come to think of it. It should always be something explicit like bytes() or chars() or graphemes(). But they’re there for legacy reasons.
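
          A hypothetical sketch of what that could look like in Rust (the ExplicitLen trait and its method names are invented here purely for illustration; graphemes() is left out because it needs Unicode segmentation data from outside the stdlib):

          trait ExplicitLen {
              fn byte_count(&self) -> usize;
              fn char_count(&self) -> usize;
          }

          impl ExplicitLen for str {
              fn byte_count(&self) -> usize { self.len() }
              fn char_count(&self) -> usize { self.chars().count() }
          }

          fn main() {
              assert_eq!("héllo".byte_count(), 6); // é takes two bytes in UTF-8
              assert_eq!("héllo".char_count(), 5);
          }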

        • Knusper@feddit.de

          That Rust function returns the number of codepoints, not the number of graphemes, and the codepoint count is rarely what you actually want. You need to use a facepalm emoji with skin tone modifiers to see the difference.

          The way to get a proper grapheme count in Rust is e.g. via this library: https://crates.io/crates/unicode-segmentation
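
          A minimal sketch using that crate (assuming it has been added as a dependency; graphemes(true) asks for extended grapheme clusters):

          use unicode_segmentation::UnicodeSegmentation;

          fn main() {
              let s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}"; // 🤦🏼‍♂️
              assert_eq!(s.chars().count(), 5);         // code points
              assert_eq!(s.graphemes(true).count(), 1); // extended grapheme clusters
          }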

          • Djehngo@lemmy.world

            Makes sense. The code-point split is stable, meaning it’s fine to put in the standard library; the grapheme split changes every year, so that volatility is probably better off in a crate.

            • Knusper@feddit.de

              Yeah, although having now seen two commenters with relatively high confidence claiming that counting codepoints ought to be enough…

              …and me almost having been the third such commenter, had I not decided to read the article first…

              …I’m starting to feel more and more like the stdlib should force you through all kinds of hoops to get anything resembling a size of a string, so that you gladly search for a library.

              Like, I’ve worked with decoding strings quite a bit in the past, I felt like I had an above average understanding of Unicode as a result. And I was still only vaguely aware of graphemes.

              • Turun@feddit.de

                For what it’s worth, the documentation is very, very clear on what these methods return. It explicitly redirects you to crates.io for splitting into grapheme clusters. It would be much better to have it in std, but I understand the argument that std should only contain stable stuff.

                For a systems programming language, the .len() method should return the byte count IMO.

                • Knusper@feddit.de

                  The problem is when you think you know stuff, but you don’t. I knew that counting bytes doesn’t work, but thought the number of codepoints was what I want. And then knowing that Rust uses UTF-8 internally, it’s logical that .chars().count() gives the number of codepoints. No need to read documentation, if you’re so smart. 🙃

                  It does give you the correct length in quite a lot of cases, too. Even the byte length looks correct for ASCII characters.

                  So, yeah, this would require a lot more consideration whether it’s worth it, but I’m mostly thinking there’d be no .len() on the String type itself, and instead to get the byte count, you’d have to do .as_bytes().len().
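
                  As a minimal sketch, that would look like this with today’s API:

                  fn main() {
                      let s = String::from("héllo");
                      assert_eq!(s.as_bytes().len(), 6); // explicitly asking for bytes
                      assert_eq!(s.chars().count(), 5);  // explicitly asking for code points
                  }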

      • Knusper@feddit.de

        Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it’s also kind of mad to put something like this into a stdlib.

        Its behaviour will break with each new Unicode standard. And you’d have to upgrade the whole stdlib to keep up-to-date with the newest Unicode standards.

        • Treeniks@lemmy.ml

          The way UTF-8 works is fixed though, isn’t it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.

          Plus, in Rust you can instead use .chars().count(), as Rust’s char type is a Unicode scalar value and strings are guaranteed to be valid UTF-8.

          turns out one should read the article before commenting

          • Knusper@feddit.de

            No offense, but did you read the article?

            You should at least read the section “Wouldn’t UTF-32 be easier for everything?” and the following two sections for the context here.

            So, everything you’ve said is correct, but it’s irrelevant for the grapheme count.
            And you should pretty much never need to know the number of codepoints.

            • Treeniks@lemmy.ml

              yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.

              • Knusper@feddit.de

                No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠

        • ono@lemmy.ca

          It might make more sense to expose a standard library API for Unicode data provided by (and updated with) the operating system. Something like the time zone database.

  • atheken@programming.dev

    Unicode is thoroughly underrated.

    UTF-8, doubly so. One of the amazing/clever things they did was to build off of ASCII as a subset by taking advantage of the extra bit to stay backwards compatible, which is a lesson we should all learn when evolving systems with users (your chances of success are much better if you extend than if you rewrite).
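
    A small sketch of that backwards compatibility: ASCII bytes keep their exact values (high bit clear), while anything non-ASCII becomes a multi-byte sequence whose bytes all have the high bit set, so existing ASCII data is already valid UTF-8.

    fn main() {
        // Every ASCII string is byte-for-byte identical in UTF-8.
        assert_eq!("ASCII".as_bytes(), &[0x41, 0x53, 0x43, 0x49, 0x49][..]);
        // Non-ASCII characters use bytes with the high bit set, e.g. € = E2 82 AC.
        assert!("€".as_bytes().iter().all(|b| b & 0x80 != 0));
    }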

    On the other hand, having dealt with UTF-7 (a very “special” email encoding), it takes a certain kind of nerd to really appreciate the nuances of encodings.

  • Obscerno@lemm.ee

    Man, Unicode is one of those things that is both brilliant and absolutely absurd. There is so much complexity to language and making one system to rule them all ends up involving so many compromises. Unicode has metadata for each character and algorithms dealing with normalization and capitalization and sorting. With human language being as varied as it is, these algorithms can have really wacky results. Another good article on it is https://eev.ee/blog/2015/09/12/dark-corners-of-unicode/
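
    As a tiny example of the normalization wrinkle: the same rendered "é" can be a single precomposed code point or a base letter plus a combining accent, and the two spellings are not byte-equal.

    fn main() {
        let precomposed = "\u{00E9}"; // é as a single code point
        let decomposed = "e\u{0301}"; // e + COMBINING ACUTE ACCENT
        assert_ne!(precomposed, decomposed); // same visible text, different encodings
    }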

    And if you want to RENDER text, oh boy. Look at this: https://faultlore.com/blah/text-hates-you/

  • Knusper@feddit.de

    They believed 65,536 characters would be enough for all human languages.

    Gotta love these kinds of misjudgements. Obviously, they were pushing against pretty hard size restrictions back then, but at the same time, they did have the explicit goal of fitting in all languages, and if you just look at the Asian languages, it should be pretty clear that 65,536 is not a lot at all…

  • Kevin Lyda@programming.dev

    The mouse pointer background is kind of a dick move. Good article, but the background is annoying for tired old eyes, which I assume are a target demographic for that article.

  • zquestz@lemm.ee

    Was actually a great read. I didn’t realize there were so many ways to encode the same character. TIL.

  • lucas@startrek.website

    currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

    Who wants to tell the author that not everything was invented in the US? (And computers certainly weren’t)

    • Deebster@lemmyrs.org

      The stupid thing is, all the author had to do was write “kind of tells you who invented ASCII” and he’d have been 100% right in his logic and history.

    • SnowdenHeroOfOurTime@unilem.org

      Where were computers invented in your mind? You could define computer multiple ways but some of the early things we called computers were indeed invented in the US, at MIT in at least one case.

      • lucas@startrek.website

        Well, it’s not really clear-cut, which is part of my point, but probably the 2 most significant people I could think of would be Babbage and Turing, both of whom were English. Definitely could make arguments about what is or isn’t considered a ‘computer’, to the point where it’s fuzzy, but regardless of how you look at it, ‘computers were invented in America’ is rather a stretch.

        • SnowdenHeroOfOurTime@unilem.org

          ‘computers were invented in America’ is rather a stretch.

          Which is why no one said that. I read most of the article and I’m still not sure what you were annoyed about. I didn’t see anything US-centric, or even anglocentric really.

          • lucas@startrek.website

            To say I’m annoyed would be very much overstating it, just a (very minor) eye-roll at one small line in a generally very good article. Just the bit quoted:

            currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

            So they could also be attributing it to some other country that uses $ for its currency, of which there are a few, but it seems most likely to be referring to the USD.

      • Deebster@lemmyrs.org

        I think the author’s intended implication is absolutely that it’s a dollar because the USA invented the computer. The two problems I have are that:

        1. He’s talking about the American Standard Code for Information Interchange, not computers at that point
        2. Brits or Germans invented the computer (although I can’t deny that most of today’s commercial computers trace back to the US)

        It’s just a lazy bit of thinking in an otherwise excellent and internationally-minded article and so it stuck out to me too.

  • mindbleach@sh.itjust.works

    I’m still sour about text having color. Yeah, I know little icons peppered forums. That’s why people liked reddit! It got rid of that shit! Now it’s part of the universal standard? Not just the ability to draw a turd on someone’s monitor, but to have it be colored in brown? The hell with that. You wanna have animated GIFs next? Let someone put their username in marquee? Or, like right-alignment, make rainbow signatures a free gimmick that text engines have to live with?

    Meanwhile, the alphabets of upside-down and small-caps letters are still incomplete.

      • gerryflap@feddit.nl

        As a developer, I feel absolute pain for the people who had to convert these. There’s quite some edge cases and sensitive topics to dodge here, and doing something wrong might piss people off. They must’ve had some lengthy meetings about a few emoji.

    • JackbyDev@programming.dev

      𝖄𝖔𝖚 𝖘𝖔𝖗𝖙 𝖔𝖋 𝖊𝖓𝖉 𝖚𝖕 𝖍𝖆𝖛𝖎𝖓𝖌 𝖋𝖔𝖓𝖙𝖘 𝖊𝖒𝖇𝖊𝖉𝖉𝖊𝖉 𝖎𝖓 𝖙𝖍𝖊 𝖊𝖓𝖈𝖔𝖉𝖎𝖓𝖌. 𝕴 𝖚𝖓𝖉𝖊𝖗𝖘𝖙𝖆𝖓𝖉 𝖜𝖍𝖞 𝖎𝖙 𝖍𝖆𝖕𝖕𝖊𝖓𝖘, 𝕴’𝖒 𝖓𝖔𝖙 𝖘𝖆𝖞𝖎𝖓𝖌 𝖎𝖙 𝖘𝖍𝖔𝖚𝖑𝖉𝖓’𝖙, 𝖇𝖚𝖙 𝖎𝖙’𝖘 𝖘𝖙𝖎𝖑𝖑 𝖆 𝖜𝖊𝖎𝖗𝖉 𝖘𝖎𝖉𝖊 𝖊𝖋𝖋𝖊𝖈𝖙.

      As normal letters in case your screen cannot render it.

      You sort of end up having fonts embedded in the encoding. I understand why it happens, I’m not saying it shouldn’t, but it’s still a weird side effect.

      • mindbleach@sh.itjust.works

        And it risks sites like this entering an arms race for attention-grabbing bullshit, where every post tries not to look like plain text. This didn’t really happen to reddit because the old guard (hello) were curmudgeons. Happened to Craigslist and eBay, though, where the attention-whore behavior is directly monetized.

  • onlinepersona@programming.dev

    Because strings are such a huge problem nowadays, every single software developer needs to know the internals of them. I can’t even stress it enough: strings are such a burden nowadays that if you don’t know how to encode and decode one, you’re beyond fucked. It’ll make programming so difficult - no, even worse, nigh impossible! Only those who know about Unicode will be able to write any meaningful code.

  • robinm@programming.dev

    I do understand why old Unicode versions re-used “i” and “I” for the Turkish lowercase dotted i and the Turkish uppercase dotless I, but I don’t understand why more recent versions have not introduced two new characters that look exactly the same but don’t require locale-dependent knowledge to do something as basic as “to lowercase”.
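
    A sketch of what that locale problem looks like with Rust’s to_lowercase, which is locale-independent (assuming it applies the standard unconditional Unicode mappings, as documented):

    fn main() {
        // Turkish expects 'I' to lowercase to dotless 'ı' (U+0131), but the
        // default locale-free mapping can only ever produce 'i':
        assert_eq!('I'.to_lowercase().collect::<String>(), "i");
        // 'İ' (U+0130) lowercases to 'i' plus COMBINING DOT ABOVE:
        assert_eq!('\u{0130}'.to_lowercase().collect::<String>(), "i\u{0307}");
    }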

  • Phoenixz@lemmy.ca

    Just give me plain UTF-32 with its ~4 billion code points; that really should be enough for any symbol we can come up with. Give everything its own code point, no bullshit with combined glyphs that make text processing a nightmare. I need to be able to do a strlen either on the byte length or on the number of characters without the CPU spending minutes counting each individual character.

    I think Unicode started as a great idea and then kind of blundered into aimless “everybody kinda does what everyone wants” territory. Unicode is for humans, sure, but we shouldn’t forget that computers actually have to do the work.