STS (Secure Time Seeding) uses server time from SSL handshakes, which is fine when talking to other Microsoft servers, but other implementations put random data in that field to prevent fingerprinting.
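For context, in TLS 1.2 the first 4 bytes of the ServerHello `random` field were historically a `gmt_unix_time` timestamp, which is what STS reads; modern stacks (and TLS 1.3) fill all 32 bytes with random data. A rough illustrative sketch of that layout (not Microsoft's actual STS code, names are made up):

```typescript
// Illustrative only: where the timestamp nominally lives in a TLS 1.2
// ServerHello random. Many implementations now randomize all 32 bytes,
// so this value can be pure noise.
function timestampFromServerHelloRandom(random: Uint8Array): Date | null {
  if (random.length !== 32) return null;

  // First 4 bytes: big-endian seconds since the Unix epoch (per TLS 1.2).
  const seconds =
    ((random[0] << 24) | (random[1] << 16) | (random[2] << 8) | random[3]) >>> 0;
  const candidate = new Date(seconds * 1000);

  // A sanity window filters most random garbage, but a random value can
  // still land on a plausible-looking date -- which is the whole problem.
  const year = candidate.getUTCFullYear();
  return year >= 2000 && year <= 2100 ? candidate : null;
}
```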
While the root issue was still unknown, we actually wrote one. It sort of made sense: check that `date.from` isn't later than `date.to` in the generated range used for the synchronization request. Obviously. You never know what some idiot future coder (usually yourself, a few weeks from now) would do, am I right?
However, it was far worse to write the code that fulfilled the test. In the very same few lines of code, we fetched the current date from `time.now()` plus some time span as `date.to`, fetched the last synchronization timestamp from the db as `date.from`, and then validated that `date.from` wasn't greater than `date.to`, and if so, logged an error about it. The validation code made no logical sense when looking at it.
Feels like writing `Assert.is(false, "This should never happen");` and seeing it pop up one time?
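A rough sketch of the kind of check described above (the `lastSyncTime` helper and the exact time span are made up for illustration):

```typescript
interface SyncRange {
  from: Date;
  to: Date;
}

// Both inputs are "trusted": `from` comes from our own database,
// `to` from the local clock. We still have to check the range.
async function buildSyncRange(
  db: { lastSyncTime(): Promise<Date> }
): Promise<SyncRange | null> {
  const from = await db.lastSyncTime();      // last successful sync
  const to = new Date(Date.now() + 60_000);  // "now" plus a small span

  // "This should never happen" -- unless the system clock has jumped
  // backwards past the stored timestamp (e.g. the STS issue above).
  if (from > to) {
    console.error(
      `Invalid sync range: from=${from.toISOString()} > to=${to.toISOString()}`
    );
    return null;
  }
  return { from, to };
}
```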
I feel like the 3rd party API should have had some error checking, although that might have strayed too far into a client’s business logic.
If it is an API of incidents, that suggests past incidents. And the whole "never trust user data" principle kinda implies they should throw an error if you request information about a time range in the future.
I guess not throwing an error does allow the 3rd party to "schedule" an incident in the future, e.g. planned maintenance/downtime.
But then, that isn't separation of concerns. Ideally those endpoints would be separate: one for planned, hypothetical incidents and one for historical, concrete incidents.
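Something like this on the server side is what I'd expect for the historical endpoint (purely a sketch, the endpoint and query shape are hypothetical):

```typescript
interface IncidentQuery {
  from: Date;
  to: Date;
}

// Hypothetical validation for a historical-incidents endpoint:
// reject inverted ranges and ranges reaching into the future,
// leaving planned maintenance to a separate endpoint.
function validateHistoricalQuery(
  query: IncidentQuery
): { ok: true } | { ok: false; error: string } {
  const now = new Date();
  if (query.from > query.to) {
    return { ok: false, error: "'from' must not be later than 'to'" };
  }
  if (query.to > now) {
    return {
      ok: false,
      error: "historical incidents cannot be queried for a future time range",
    };
  }
  return { ok: true };
}
```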
It’s definitely an odd scenario where you are taking your trusted data (from your systems and your database), then having to validate it.