• msephton 4 days ago

    This reminded me of the first(?) sub-pixel typeface (by Miha as posted on Typophile in 2009) https://adamnorwood.com/notes/typophile-user-miha-is-doing-s...

    And the first sub-pixel font Millitext from 2008, as mentioned in another comment.

    • keville 4 days ago

      The Apple II (1977) did an early version of this; it essentially had purple and green addressable colors in each pixel. With both on in a single pixel you got white text, but you could also light two adjacent pixels, one purple and one green, to produce a half-pixel offset that gave smoother diagonal lines than whole-pixel coordinates allow.

      https://en.wikipedia.org/wiki/Subpixel_rendering

    • dahart 4 days ago

      The video is really well done and interesting, and makes you think the right way about sub-pixels. Though it does seem a little amusing to me that the process Japhy used is basically exactly the same process the display is already using for antialiasing fonts; he’s just doing it manually. We already have sub-pixel art, pretty much all the time. ;) I haven’t tried, but in theory the hidden-message trick at the end can be done purely by reducing your font size to below 1 pt.

      • layer8 4 days ago

        There was also a period in the 2000s when icon graphics were designed with subpixel effects in mind. It started when LCDs displaced CRTs (previously, designs took the blurriness of CRTs into account) and lasted until the mobile revolution and high-DPI displays started favoring more scalable solutions (at the cost of less nice icons).

      • ludicrousdispla 4 days ago

        I've played with this quite a bit over the past few years and want to clarify that the colors of light from the display only combine in your visual system (retina/brain); that is how you come to perceive colors other than red, green, and blue.

        • Rygian 4 days ago

          The interesting bit here, for me, is that the eyes perceive _as almost the same color_ the yellow in a yellow lemon, and the yellow made by the combination of green and blue from a screen. I find that convergence fascinating.

          • atwrk 4 days ago

            That is because we also can't actually directly perceive the yellow from that lemon, i.e. we don't possess cones that have their maximum sensitivity at 570 nm (yellow). Instead, yellow is created in our brain by combining the data from the M and L cones: if both signal at about equal intensity, our brain interprets that as yellow. So the perceived yellow can actually be 570 nm, or 540 nm (yellow-green) plus 600 nm (orange), or similar. This only stops working if the distance between the two wavelengths is too great.
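
            This can be sketched numerically. The cone curves below are crude Gaussian stand-ins (the peak and width values are illustrative guesses, not real colorimetric data), but they show how a single 570 nm light and a tuned 540 nm + 600 nm mixture can land on nearly the same M:L ratio:

```python
import math

# Crude Gaussian stand-ins for M- and L-cone sensitivity curves.
# Peaks/widths are rough illustrative guesses, not real colorimetric data.
def m_cone(wl):
    return math.exp(-((wl - 543) / 45) ** 2)

def l_cone(wl):
    return math.exp(-((wl - 566) / 50) ** 2)

def ml_ratio(spectrum):
    """spectrum: list of (wavelength_nm, intensity) pairs."""
    m = sum(i * m_cone(wl) for wl, i in spectrum)
    l = sum(i * l_cone(wl) for wl, i in spectrum)
    return m / l

pure_yellow = [(570, 1.0)]           # a single 570 nm light
mixture = [(540, 0.5), (600, 0.95)]  # yellow-green + orange, intensities tuned

print(ml_ratio(pure_yellow))
print(ml_ratio(mixture))
# The two ratios come out almost identical, which is all the
# information the brain gets, so both read as "yellow".
```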

            • dahart 4 days ago

              These two different yet same yellows have a fancy name too: metamers. [1] I think it’s super interesting too, and you can even create metamers out of non-metamers using the right light sources. As a trichromat subject to the same metamers as most humans, I want to know what it feels like to be a tetrachromat who can see the differences between colors I can’t tell apart. Or a bi/mono-chromat (aka color blind) where metamers really start to stack up.

              https://en.wikipedia.org/wiki/Metamerism_(color)

              • p1mrx 4 days ago

                  red + green = yellow
                  green + blue = cyan
                • Rygian 3 days ago

                  Thanks, you're right! I typed without thinking.

              • unwind 4 days ago

                Sure, but that is true at the "macro" scale too of (for instance) discrete RGB LEDs, or any other case where color is simulated by emitting red, green, and blue light.

                There is nothing unique in this regard about the sub-pixels except that they're small, right?

                • nkrisc 4 days ago

                  Correct. If you ever have the chance, get up close to one of those enormous displays you see at a convention center or hanging above a sports arena. They often have LEDs about the size you’d expect when you think of a typical LED.

                  But with those you’re far enough away that their apparent size is similar enough to the sub-pixels in your monitor, so your visual system combines them.

                  • ludicrousdispla 4 days ago

                    Yes, I just wanted to emphasize that the R, G, B light does not really 'combine' but is instead perceived by people as the additive secondary colors.

                    For example, if a display is emitting red and green light then the light reaching a viewer's retina will be red and green light, not yellow light.

                    • dahart 4 days ago

                      There’s no way to tell the difference.

                      RGB light does actually ‘combine’, physically, before we ‘perceive’ the color. It’s because we (most of us) have 3 sensors each with specific wavelength response functions. The physical output of those sensors is the same for a red+green combination as it is for a yellow combination, and therefore the color has been combined as part of measuring the color.

                      • ludicrousdispla 4 days ago

                        >> RGB light does actually ‘combine’, physically, before we ‘perceive’ the color.

                        We might be arguing semantics but I'm going to say no, they do not physically combine before we perceive the color. This is supported by the link you provided on metamerism in a related comment.

                        • dahart 4 days ago

                          The signal the cones output that the brain gets is lossy; it cannot be restored to its original wavelength histogram, and in my book that qualifies as “combined”. If you’re trying to say that red+green light doesn’t turn into yellow-wavelength light before hitting the cones, then obviously yes. Otherwise, it seems like you’re arguing against your earlier comment and not mine.

                          Your example was yellow light vs red+green light. The physical signals that our brain receives are physically identical in those two cases, not just perceived as the same; therefore, by your earlier definition of combine, the colors do combine before anything is perceived. Seeing red+green light as yellow is not a matter of the brain interpreting two different signals as being the same; the signals themselves are the same, before the brain ever gets them, as a byproduct of tristimulus response.

                          We don’t need human perception to demonstrate this either: a digital RGB camera has the same property, that red+green is physically identical to yellow when you measure using RGB primaries, with no perception involved. If you want to tell red+green and yellow apart, you need either more primaries or different primaries, and perception isn’t part of the equation.

                          Your original claim on this point that the brain is involved was incorrect. The retina is involved in the sense that the initial output of cones in response to seeing either red+green or yellow light is a signal we call yellow. The moment light is converted by cones into an electrical signal, that signal has lost any information that could distinguish between red+green and yellow. And yes, all of this is supported by the link I provided on metamerism.

                  • esperent 3 days ago

                    The video does briefly address this at the 1:20 mark. It could have been clearer, though.

                    • dahart 4 days ago

                      It’s due to having only 3 types of sensors. They combine as a byproduct of capturing the photons, and while this is indeed part of the visual system, it happens even before the retina and brain get ahold of the signal. The signal itself inherently represents an already-combined color.

                    • JKCalhoun 4 days ago

                      Green is perceptually brighter to humans than red and blue, so perhaps you need to dial down its level when displaying it.

                      • spookie 4 days ago

                        Standard colorspaces already take this into account :)

                        It's important in rendering to take your colorspaces seriously, from the engine driving them to the artist making the art. There are some crazy optimizations that take your perception of colour into account too; texture compression relies on this to some extent (blues are given fewer bits, for example).
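
                        For a concrete (if simplified) example of spending bits where perception needs them, here is a sketch of the classic RGB565 pixel format, which gives green, the channel we're most sensitive to, the extra bit (this is the general format, not any particular engine's code):

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit RGB into 16 bits: 5 bits red, 6 bits green, 5 bits blue.
    Green gets the extra bit because human vision is most sensitive to it."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(p):
    r, g, b = (p >> 11) & 0x1F, (p >> 5) & 0x3F, p & 0x1F
    # Expand back to 8 bits by bit replication.
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

print(unpack_rgb565(pack_rgb565(200, 150, 100)))  # close to, but not exactly, the input
```

                        The round trip loses a little precision in red and blue but less in green, which is exactly the perceptual trade-off being made.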

                      • ikesau 4 days ago

                        I highly recommend people click through the rest of this guy's videos. Lots of fun and playful experiments. :)

                        • agumonkey 4 days ago

                          Side note: ClearType was quite a nice improvement in Windows when it came out https://en.wikipedia.org/wiki/ClearType

                          (It's also based on the subpixel antialiasing demonstrated in the video, just not mentioned there.)

                          • layer8 4 days ago

                            I hated the color fringes of Cleartype (yes, even after running the optimizer), until my eyes became bad enough that the fringes stopped being so prominent ;). It’s still worse for colored text, though, as in syntax highlighting.

                          • Luc 4 days ago

                            You can do very slow yet smooth scrolling for games by using the subpixels, at least on platforms like the Game Boy Color where you know the pixel geometry. Of course, you need to use 'whole' pixels (composed of R, G, and B) so the perceived color doesn't change.
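
                            A sketch of the idea, assuming a horizontal R,G,B stripe layout (real panels vary, and any specific device's subpixel order would need checking): treat a grayscale scanline as a flat run of subpixels and shift it by one slot, i.e. one third of a pixel:

```python
def subpixel_shift_right(row):
    """Shift a grayscale scanline right by 1/3 pixel on an RGB-stripe panel.
    row: list of 0-255 gray values; returns a list of (r, g, b) pixels."""
    flat = []
    for v in row:
        flat += [v, v, v]      # a gray pixel lights R, G and B equally
    flat = [0] + flat[:-1]     # move everything one subpixel to the right
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

print(subpixel_shift_right([255, 255, 0, 0]))
# -> [(0, 255, 255), (255, 255, 255), (255, 0, 0), (0, 0, 0)]
```

                            The interior of the white run stays white; only the edges pick up a faint color fringe, which is why this works best on bright, neutral-colored shapes.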

                            • msephton 4 days ago

                              In the games I make I always track positions at a subpixel level and then I can choose whether or not to draw things with that accuracy depending on my needs. I might want a shape to move more smoothly at a sub-pixel level, or round things to whole pixels. Both ways are useful for different reasons or in different situations.
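
                              A minimal sketch of that pattern (hypothetical names, not msephton's actual code): keep positions as floats, and decide at draw time whether to snap:

```python
class Sprite:
    """Track position at sub-pixel precision; choose rounding at draw time."""
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def move(self, dx, dy):
        self.x += dx
        self.y += dy

    def draw_pos(self, snap=True):
        # snap=True rounds to whole pixels (crisp, stable edges);
        # snap=False keeps the fractional position for smooth motion.
        return (round(self.x), round(self.y)) if snap else (self.x, self.y)

s = Sprite()
for _ in range(3):
    s.move(0.4, 0.0)       # accumulated position is 1.2, not a whole pixel
print(s.draw_pos())        # snapped to whole pixels
print(s.draw_pos(False))   # sub-pixel accurate
```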

                              • adgjlsfhk1 4 days ago

                                I do think there's a really interesting alternate world in which we never invented the color pixel, and image formats were instead represented as an array of grayscale pixels with a repeating color mask.
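
                                One way to picture that world (a toy sketch, not any real format): store one intensity per subpixel plus a repeating mask, and only group into conventional pixels when you need to:

```python
# Toy version of the idea: one grayscale intensity per subpixel,
# with a repeating mask saying what color each slot emits.
intensities = [200, 50, 0, 200, 200, 200]  # hypothetical sample data
mask = ["R", "G", "B"]                     # repeats across the row

def to_rgb_pixels(intensities, period=3):
    """Group subpixel intensities into conventional (r, g, b) pixels."""
    return [tuple(intensities[i:i + period])
            for i in range(0, len(intensities), period)]

print(to_rgb_pixels(intensities))  # -> [(200, 50, 0), (200, 200, 200)]
```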

                              • smusamashah 4 days ago

                                I don't understand the last bit where he hid a text message in the sub-pixels. How did that work?

                                • ludicrousdispla 4 days ago

                                  The rationale is that you can control the position of the lit sub-parts that make up a letter by changing the color of the larger pixel.

                                  http://www.msarnoff.org/millitext/

                                  https://millitext.gk.wtf/
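
                                  A sketch of that rationale (assuming a horizontal R,G,B stripe panel; this is not msarnoff's original code): each glyph row is only one screen pixel wide, and the pixel's color picks which of its three stripes light up:

```python
def column_to_color(bits):
    """bits: 3-char string like '010' -> which stripes are lit -> (r, g, b).
    On an RGB-stripe panel this yields a 3-subpixel-wide glyph row."""
    return tuple(255 if b == "1" else 0 for b in bits)

# Hypothetical 3x5 glyph for 'I': top/bottom bars plus a middle stem.
glyph_I = ["111", "010", "010", "010", "111"]
print([column_to_color(row) for row in glyph_I])
# White pixels form the bars; pure green pixels form the 1-subpixel-wide stem.
```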

                                  • msarnoff 4 days ago

                                    I’m the original millitext creator and I’m very happy to see that someone made a new web-based generator. I wrote my original one in Ruby in 2008 and then stopped writing Ruby soon after that, leaving it to decay as the language’s ecosystem evolved.

                                  • dahart 4 days ago

                                    It’s what might happen if you render ClearType text at 1px wide.

                                  • Darkenetor 4 days ago

                                    There's a generator for those hidden subpixel message textures:

                                    https://jsbin.com/korotaluso/edit?js,output