• 4 Posts
  • 55 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • This is true, but…

    Moore’s Law can be thought of as an observation about the exponential growth of technology power per $ over time. So yeah, not Moore’s Law, but something like it that ordinary people can see evolving right in front of their eyes.

    So a $40 Raspberry Pi today runs benchmarks 4.76 times faster than a multimillion-dollar Cray supercomputer from 1978. Is that Moore’s Law? No, but the bang/$ curve probably looks similar to it over the four-plus decades in between.
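    To put a rough number on it: assuming a Cray-1 price of about $8 million and a span of roughly 42 years (those two figures are my assumptions; only the 4.76x and $40 come from above), the bang/$ doubling time works out to about two years, which is squarely Moore’s-Law territory:

    ```python
    # Back-of-the-envelope bang/$ comparison: Cray-1 (1978) vs. a $40 Raspberry Pi.
    # The 4.76x and $40 figures are from the comment above; the ~$8M Cray-1 price
    # and the ~42-year span are assumptions for illustration only.
    import math

    cray_price = 8_000_000      # assumed Cray-1 price (late-70s dollars)
    pi_price = 40               # Raspberry Pi price
    speedup = 4.76              # Pi benchmark speed relative to the Cray-1
    years = 42                  # roughly 1978 to 2020

    # How much did performance per dollar improve?
    perf_per_dollar_gain = speedup * (cray_price / pi_price)

    # If that gain were steady exponential growth, how often did bang/$ double?
    doubling_time = years / math.log2(perf_per_dollar_gain)

    print(f"perf/$ improved ~{perf_per_dollar_gain:,.0f}x")      # ~952,000x
    print(f"implied doubling time: ~{doubling_time:.1f} years")  # ~2.1 years
    ```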

    You can see a similar curve when you look at data transmission speed and volume per $ over the same time span.

    And then there’s storage: from 5 1/4" floppy disks, or effing cassette drives, back on the earliest home computers, to the round tapes we used to cart around when I started working in the 80’s (which held around 64KB), to microSD cards with multi-terabyte capacity today.

    Same curve.

    Does anybody care whether the storage is a tape, or a platter, or 8 platters, or circuitry? Not for this purpose.

    The implication of “That’s not Moore’s Law” is that the observation isn’t valid. Which is BS. Everyone understands that the true wonderment is how your bang/$ goes up exponentially over time.

    Even if you’re technical, you have to understand that this factor is what drives the applications.

    Why aren’t we all still walking around with Sony Walkmans? Because small, cheap hard drives enabled the iPod. Why aren’t we all still walking around with iPods? Because cheap data volume and speed enabled streaming services.

    While none of this involves counting transistors per inch on a chip, it’s actually more important/interesting than Moore’s Law. Because it speaks to how the power of the technology available for everyday uses is exploding over time.




  • Back in the 70’s and 80’s there were “Travesty Generators”. You pushed some text into them and they developed linguistic rules based on probabilities determined by the text. Then you could have them generate brand new text randomly created by applying the linguistic rules developed from the source text.

    Surprisingly, they would generate “brand new” words that weren’t in the original text but were real words. And the output stylistically matched the input text. So you put in Shakespeare and you got out something that sounded like Shakespeare. You get the idea.
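    For anyone curious, the guts of a travesty generator amount to a character-level n-gram model. Here’s a minimal sketch of that approach as I understand it (the order-4 window and the filename are placeholders, not the original programs’ settings):

    ```python
    # Minimal sketch of a travesty generator as an order-n character model.
    # This is a reconstruction of the general idea, not the original 70s/80s code:
    # record which characters follow every n-character window in the source,
    # then walk those frequencies to spit out new text in the same style.
    import random
    from collections import defaultdict

    def build_model(text, order=4):
        model = defaultdict(list)
        for i in range(len(text) - order):
            window = text[i:i + order]
            model[window].append(text[i + order])   # duplicates = frequency weighting
        return model

    def generate(model, length=400, seed=None):
        rng = random.Random(seed)
        window = rng.choice(list(model.keys()))
        out = list(window)
        for _ in range(length):
            followers = model.get(window)
            if not followers:                       # dead end: jump to a new window
                window = rng.choice(list(model.keys()))
                followers = model[window]
            next_char = rng.choice(followers)
            out.append(next_char)
            window = window[1:] + next_char
        return "".join(out)

    if __name__ == "__main__":
        source = open("some_source_text.txt").read()   # placeholder filename
        print(generate(build_model(source)))
    ```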

    I built one and tried running some T.S. Eliot through it, because his stuff is, IMHO, close to gibberish to begin with. The results were disappointing, basically because the output couldn’t get any more gibberishy than the source.

    I strongly suspect that the same would happen with Trump’s gibberish. There used to be a bunch of Travesty Generators online, and you could probably try one out to see.





  • HamsterRage@lemmy.ca to Lemmy Shitpost@lemmy.world · New tech discovered

    I think it’s a bit more than that. I think that the idea is that you simplify the problem so that the rubber duck could understand it. Or at least reformulate it in order to communicate it clearly.

    It’s the simplification, reformulation or reorganisation that helps to get the breakthrough.

    Just thinking out loud isn’t quite the same thing.


  • It goes really well with YAGNI. Also DRY without YAGNI is a recipe for premature over-architecting.

    This is also one of the main benefits of TDD. There was a really good video, which I can’t find again, demonstrating how TDD leads you to different solutions than the ones you thought you’d use when you started. Because you code exclusively for one single requirement at a time, adding or changing just enough code to meet each new requirement without breaking the earlier tests, the design evolves as you go.
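    Here’s a toy illustration of that “one requirement at a time” rhythm (an invented FizzBuzz-style example, not the one from the video): each test was added in turn, and the function grew just enough to pass it without breaking the earlier tests.

    ```python
    # Toy TDD sketch (invented FizzBuzz-style example, not from the video above).
    # Each test below was added one at a time, and describe() grew just enough to
    # pass the newest test without breaking the earlier ones.

    def describe(n):
        if n % 15 == 0:       # requirement 3 forced the combined case
            return "fizzbuzz"
        if n % 3 == 0:        # requirement 1
            return "fizz"
        if n % 5 == 0:        # requirement 2
            return "buzz"
        return str(n)         # the very first version was just this line

    # Run with pytest, or call the functions directly.
    def test_requirement_1_multiples_of_three():
        assert describe(9) == "fizz"

    def test_requirement_2_multiples_of_five():
        assert describe(10) == "buzz"

    def test_requirement_3_multiples_of_both():
        assert describe(30) == "fizzbuzz"

    def test_everything_else_is_just_the_number():
        assert describe(7) == "7"
    ```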