I am going to be harsh here, but I think it’s necessary: I don’t think anyone should use Emit-C in production. Without proper tooling, including a time-traveling debugger and a temporal paradox linter, the use case just fails compared to more established languages like Rust.
No one is claiming this is necessary. It's a toy language built for fun.
Woosh
Given how often people around here seriously say things like the top-level comment being responded to, an explicit /s is almost necessary, since it can be hard to distinguish satire from the usual cynical, dismissive comments.
I downvote explicit /s markers on principle. If you have to add it, you're not doing it right (imo).
I take it you've never heard of Poe's Law?
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
err, I mean. I have not! What's that?
Touché.
Please ELI5... I know there's a joke in there, but I'm missing it.
The humor lies in the inherent absurdity of the critique itself. Obviously no one will use this in production. There’s nothing especially clever you’re missing.
I was confused by the Rust part, which is also what made me realize it was part of a joke.
The temporal paradox linter could have given it away too :)
I think a syntax highlighter and in-line documentation for future language features, before they are created, are also necessary. I'll stick with more established languages, too. Time in a single direction is already hard.
Submitted to r/altprog; I love a language that can murder variables, heh.
This one can murder its own grandfather.
For the real computer scientists out here, what would time-complexity notation look like if time travel of information were possible (e.g., doing big, expensive computations and sending the result back into the past)?
Surprisingly, there is prior work on this! https://www.scottaaronson.com/papers/ctchalt.pdf. Apparently a Turing machine with time travel can solve the halting problem.
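For the curious: the model in that line of work requires causal consistency, so a closed-timelike-curve computation amounts to a fixed-point search. Here's a toy Python sketch of that idea (my own illustration, not code from the paper, and nothing Emit-C actually provides; all names are made up): we "receive" a value from the future and only keep timelines where the computation reproduces it.

    # Toy classical simulation of a closed-timelike-curve computation as a
    # fixed-point search (illustrative sketch only).
    # A value sent back in time must be causally consistent: applying the
    # computation to it must reproduce it, i.e. step(x) == x.

    def ctc_run(step, candidates):
        """Return a causally consistent value: one with step(x) == x."""
        for x in candidates:
            if step(x) == x:
                return x
        raise RuntimeError("no consistent timeline (grandfather paradox)")

    # Example: "receive" a nontrivial factor of n from the future. Only a
    # correct guess is a fixed point, so the surviving timeline hands us a
    # factor "for free"; the search cost is hidden inside the time loop.
    def factor_via_ctc(n):
        def step(guess):
            # Consistent timeline: the guess divides n, send it back as-is.
            # Inconsistent timeline: perturb the guess so step(x) != x.
            return guess if n % guess == 0 else guess + 1
        return ctc_run(step, range(2, n))

    print(factor_via_ctc(91))  # -> 7

Of course, simulated classically the "time loop" just degenerates into brute force over candidates; the interesting claim is that with a real CTC the consistency condition itself does the work.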
With time travel, isn't the halting problem trivially solvable? You start the program, and then just jump to after the end of forever and see if the program terminated.
> With time travel, isn't the halting problem trivially solvable? You start the program, and then just jump to after the end of forever and see if the program terminated.
Some programs won't halt even after forever, in the sense of an infinite number of time steps. For example, if you want to test properties of (possibly infinite) sets of natural numbers, there's no search strategy that will go through them even in infinite time.
(Footnote: I'm assuming, reasonably I think, though who knows what CSists have been up to, a model of computation that allows performing countably, but not uncountably, many steps.)
But if you are at the present and don't receive a future result immediately, can't you assume it never halts? Otherwise you would have received a result.
I don't think so. That's assuming the program will always be in a frame of reference which is temporally unbounded. If, for example, it fell into a black hole, it would, IIUC, never progress (even locally) beyond the moment of crossing the event horizon.
I think you can only time travel a finite time.
If you shift the computational result back in time to the moment you started it, your big-O notation is just O(0), and quite scalable. Actually, it would open the programming market up to more beginners, because they could brute-force any computation without caring about the time dimension. Algorithm courses will go broke, and the books will all end up in the remainder bin. Of course, obscure groups of purists will insist on caring about both the space AND time dimensions of complexity, but no one will listen to them anymore.
I whimsically imagine some version of bi-directional Hoare logic.