To offer a differing opinion, why is null helpful at all?
If you have data that may be empty, it’s better to represent that possibility explicitly with an Optional<T> generic type. This makes the API clearer, and if the language doesn’t allow implicit null, it prevents anyone from passing null where a value is expected.
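As a rough sketch of what that could look like in TypeScript (Optional, some, and none are illustrative names here, not a standard library type):

```typescript
// A minimal sketch of an explicit Optional type as a tagged union.
type Optional<T> =
  | { kind: "some"; value: T }
  | { kind: "none" };

const some = <T>(value: T): Optional<T> => ({ kind: "some", value });
const none: Optional<never> = { kind: "none" };

// Callers must handle the empty case before they can touch the value.
function greet(name: Optional<string>): string {
  return name.kind === "some" ? `Hello, ${name.value}!` : "Hello, stranger!";
}
```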
Or if it’s uninitialized, the data can be stored as a Partial<T>, where every field is an Optional<U>. If the type system were nominal, it would ensure that the uninitialized or partially-initialized object can’t accidentally be used where a T is expected, since Partial<T> != T. When the object is finally ready, provide a function that converts the Partial<T> into a T.
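Continuing the sketch above with a hypothetical User type (TypeScript itself is structurally typed, so a real nominal guarantee would need branding or a different language; this only shows the shape of the conversion):

```typescript
// Hypothetical example: turning a Partial<User> draft into a full User.
// Reuses the Optional/some/none sketch above.
interface User {
  id: number;
  email: string;
}

// Succeeds only once every field has been filled in.
function finalize(draft: Partial<User>): Optional<User> {
  if (draft.id !== undefined && draft.email !== undefined) {
    return some({ id: draft.id, email: draft.email });
  }
  return none;
}
```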
Ignoring the fact that many languages, and database systems, do not support generics (but do already support null), you’ve just introduced a more complex kind of null value; you’re simply slapping some lipstick on it. 😊
In a discussion about whether null should exist at all, and what might be better, saying that Optional values aren’t available in languages whose type systems haven’t moved on since the 1960s isn’t a strong point, in my view.
The key point is that if your type system reliably knows whether something has a value or not, then your compiler can prevent every single runtime null exception: it tracks each potentially-empty value for you and makes sure it’s handled at some stage.
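TypeScript with strictNullChecks enabled is one concrete example: null is part of the type, and the compiler only narrows it away once you’ve actually handled it:

```typescript
// With strictNullChecks on, the compiler tracks nullability in the type.
function upper(s: string | null): string {
  // return s.toUpperCase();  // compile error: 's' is possibly 'null'
  if (s === null) return "";  // after this check, s is narrowed to string
  return s.toUpperCase();     // OK: the compiler knows s is non-null here
}
```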
The problem with null is that it is pervasive: any value can be null, and while you can check for it and handle it, other parts of your code can’t tell whether a given value can or can’t be null. Tracking potential nulls lives in the programmer’s memory instead of being deduced by the compiler, and checking for null everywhere is tedious and slow, so no one does it. Hence null bugs are everywhere.
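The same compiler with strictNullChecks turned off illustrates exactly this pervasiveness; null silently inhabits every type, and nothing warns the caller:

```typescript
// With strictNullChecks off, TypeScript behaves like a classic
// null-pervasive language: null is a member of every type.
function shout(s: string): string {
  return s.toUpperCase(); // type-checks, but throws TypeError if s is null
}

shout(null); // also type-checks in non-strict mode; crashes at runtime
```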
Tony Hoare, an otherwise brilliant computer scientist, called it his billion-dollar mistake a decade or two ago.
Type-safe lipstick :)
Or just implement null as a separate type, as PHP does. Then you have to accept string | null (or its shorthand, ?string) if you want nulls.
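For what it’s worth, TypeScript under strict null checks models it the same way, null as its own type that a value only admits when the type says so:

```typescript
// null as a separate type: tag admits null only because the union says so.
function label(tag: string | null): string {
  return tag ?? "(untitled)"; // '??' supplies a fallback when tag is null
}

label("release"); // "release"
label(null);      // "(untitled)"
```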