Date.prototype.getTime

# P T Withington (19 years ago)

AFAICT the canonical way to measure elapsed time in ECMAScript
requires taking the difference between two expressions of the form:

(new Date).getTime()

On the Mac SWF plug-in, this expression yields a value that appears
to have 10 µs precision (milliseconds as a float with two decimal
places). On the Windows SWF plug-in and in the Mozilla JavaScript
engine, this expression yields a value with only ms precision (as if
it had been processed by TimeClip).

I am wondering if the new standard should encourage implementors to
return a value with as much precision as the underlying hardware/OS
permits? (And to optimize this idiom, which on the face of it looks
quite expensive: creating a Date object only to extract the value
attribute and discard the object.)

Or, is there already a better, more accurate way to measure elapsed
time? Or, should there be?
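
For concreteness, here is a minimal sketch of that idiom (doStuff is just a placeholder for whatever is being timed):

    // Measure elapsed time by differencing two Date samples.
    var before = (new Date).getTime();   // milliseconds since the epoch
    doStuff();                           // placeholder for the code under measurement
    var after = (new Date).getTime();
    var elapsed = after - before;        // whole milliseconds in most engines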

# Jeff Walden (19 years ago)

P T Withington wrote:

I am wondering if the new standard should encourage implementors to return a value with as much precision as the underlying hardware/OS permits?

I would assume this doesn't really need to be specified and will be assumed by implementers (at worst in response to user demands) as much as is practical, but then again, I've never spec'd or implemented a programming language.

Or, is there already a better, more accurate way to measure elapsed time? Or, should there be?

I was under the impression that the following was more idiomatic, but I could be wrong:

    var before = Date.now();
    doStuff();
    var after = Date.now();
    var elapsedTime = after - before;

# Bob Ippolito (19 years ago)

On Jun 27, 2006, at 6:34 PM, Jeff Walden wrote:

P T Withington wrote:

I am wondering if the new standard should encourage implementors
to return a value with as much precision as the underlying
hardware/OS permits?

I would assume this doesn't really need to be specified and will be
assumed by implementers (at worst in response to user demands) as
much as is practical, but then again, I've never spec'd or
implemented a programming language.

Or, is there already a better, more accurate way to measure
elapsed time? Or, should there be?

I was under the impression that the following was more idiomatic,
but I could be wrong:

var before = Date.now(); doStuff(); var after = Date.now(); var elapsedTime = after - before;

Date.now() isn't part of the standard... or if it is, it's not
consistently implemented.

# P T Withington (19 years ago)

On 2006-06-27, at 21:34 EDT, Jeff Walden wrote:

P T Withington wrote:

I am wondering if the new standard should encourage implementors
to return a value with as much precision as the underlying
hardware/OS permits?

I would assume this doesn't really need to be specified and will be
assumed by implementers (at worst in response to user demands) as
much as is practical, but then again, I've never spec'd or
implemented a programming language.

In my experience, 'advice to the implementor' is important. It's non-binding, but it's useful to record.

Or, is there already a better, more accurate way to measure
elapsed time? Or, should there be?

I was under the impression that the following was more idiomatic,
but I could be wrong:

var before = Date.now(); doStuff(); var after = Date.now(); var elapsedTime = after - before;

Date.now() makes lots of sense! Why can't I find that in the ECMA
262-3 spec? Is this a Mozilla extension? I don't see it in
ActionScript either...

# Jeff Walden (19 years ago)

Bob Ippolito wrote:

Date.now() isn't part of the standard... or if it is, it's not consistently implemented.

Hmm, totally didn't know that. Since that doesn't seem to be part of the spec, perhaps it would be a nice addition to ES4? At this point I'm guessing it's a Mozilla extension, because Opera doesn't implement it and I'd suspect Opera would have implemented it if IE had implemented it.

# Peter Hall (19 years ago)

I was under the impression that the following was more idiomatic, but I could be wrong:

var before = Date.now(); doStuff(); var after = Date.now(); var elapsedTime = after - before;

Perhaps. But getTime() is what is implemented in es3, and I don't think this can change. The question is over the consistency of the precision of this value between implementations.

I don't have a mac here to test, but I was under the impression that timing within Actionscript had a lower precision on macs than windows (as opposed to P.T.Withington's observation of the reverse).

I don't think it is acceptable to standardise on anything coarser than 1 millisecond precision, and it is probably not practical (or indeed useful) to standardise on anything finer. getTime() should return an int, representing milliseconds, not a float. I don't know what you do about platforms where that precision is not possible to implement; and perhaps this is why es3 does not specify a precision. (It specifies only the fact that the unit is milliseconds, not the degree to which that measurement is accurate).

Peter

# Bob Ippolito (19 years ago)

On Jun 27, 2006, at 8:07 PM, Peter Hall wrote:

I was under the impression that the following was more idiomatic,
but I could be wrong:

var before = Date.now(); doStuff(); var after = Date.now(); var elapsedTime = after - before;

Perhaps. But getTime() is what is implemented in es3, and I don't think this can change. The question is over the consistency of the precision of this value between implementations.

I don't have a mac here to test, but I was under the impression that timing within Actionscript had a lower precision on macs than windows (as opposed to P.T.Withington's observation of the reverse).

I dunno what precision it has elsewhere, but it definitely has msec
precision on OS X (Intel). You're definitely right in that Flash on
Mac OS X is generally slow in comparison to anywhere else, but speed
of execution and precision are very different things...

With the given snippet:

    var l = 0;
    var i = 4;
    while (i > 0) {
        var n = (new Date()).getTime();
        if (n != l) {
            trace(n);
            l = n;
            i--;
        }
    }

I just got the following traces (with plenty of other processes
running):

    1151465324221
    1151465324222
    1151465324223
    1151465324224

I don't think it is acceptable to standardise on anything coarser than 1 millisecond precision, and it is probably not practical (or indeed useful) to standardise on anything finer. getTime() should return an int, representing milliseconds, not a float. I don't know what you do about platforms where that precision is not possible to implement; and perhaps this is why es3 does not specify a precision. (It specifies only the fact that the unit is milliseconds, not the degree to which that measurement is accurate).

Why not a (64-bit) float? Millisecond precision is pretty weak, it'd
be nicer to have higher precision where available and a float is the
only reasonable way to get it (since nanoseconds don't fit well in a
32-bit integer).
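
As a rough back-of-the-envelope check on the 32-bit point (nothing here comes from the spec):

    // A signed 32-bit integer tops out at 2^31 - 1.
    var maxInt32 = Math.pow(2, 31) - 1;          // 2,147,483,647
    var secondsOfNanos = maxInt32 / 1e9;         // ~2.1 seconds of nanoseconds
    var daysOfMillis = maxInt32 / 1000 / 86400;  // ~24.9 days of milliseconds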

# Peter Hall (19 years ago)

I was, perhaps, thinking of an old issue (chattyfig.figleaf.com/pipermail/flashcoders/2001-August/007612.html)

Peter

# Jim Grandy (19 years ago)

On Jun 27, 2006, at 8:07 PM, Peter Hall wrote:

I don't have a mac here to test, but I was under the impression that timing within Actionscript had a lower precision on macs than windows (as opposed to P.T.Withington's observation of the reverse).

There's a site (www.merlyn.demon.co.uk/js-dates.htm) with a
table comparing timing resolution of different browsers. It shows
that IE and Firefox on Windows have an updating interval of 10 msec
or worse, whereas Mac browsers tend to be closer to 1 msec. (The
Firefox/OSX entry must be for FF1.0 -- FF1.5 on my Mac has an
updating interval of 1 msec or thereabouts.) Numerical resolution is
close to 1 msec across the board, though.

jim

# P T Withington (19 years ago)

On 2006-06-27, at 23:32 EDT, Bob Ippolito wrote:

Why not a (64-bit) float? Millisecond precision is pretty weak,
it'd be nicer to have higher precision where available and a float
is the only reasonable way to get it (since nanoseconds don't fit
well in a 32-bit integer).

My point exactly. Even in an interpreter, 1ms is a looong time these
days. And floats are no longer expensive.

Adding something like Date.now() would make optimization easier for
implementers (might translate to only a few instructions on a decent
architecture, e.g., scale the cycle-count register by the system
clock frequency).

# Brendan Eich (19 years ago)

On Jun 28, 2006, at 3:10 AM, P T Withington wrote:

On 2006-06-27, at 23:32 EDT, Bob Ippolito wrote:

Why not a (64-bit) float? Millisecond precision is pretty weak,
it'd be nicer to have higher precision where available and a float
is the only reasonable way to get it (since nanoseconds don't fit
well in a 32-bit integer).

My point exactly. Even in an interpreter, 1ms is a looong time
these days. And floats are no longer expensive.

Adding something like Date.now() would make optimization easier for
implementers (might translate to only a few instructions on a
decent architecture, e.g., scale the cycle-count register by the
system clock frequency).

The "not ready for public review" section of http:// developer.mozilla.org/es4/proposals/proposals.html contains a http:// developer.mozilla.org/es4/proposals/date_and_time.html link that
dangles presently. In the wiki, it contains a sketchy list:

  • Support properties for the get/set method pairs.

  • Add ISO 8601 Date format support.

  • Add Date.now() as a standard way to get the time in milliseconds
    since the epoch.

  • A year from the epoch in nanoseconds requires more mantissa bits
    than are in IEEE754 double. Return a pair [seconds, nanoseconds] from
    a Date.nanotime() static method.
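
The arithmetic behind that last bullet is easy to verify (a back-of-the-envelope sketch, not part of the wiki text):

    // IEEE 754 doubles represent integers exactly only up to 2^53.
    var maxExactInt = Math.pow(2, 53);     // 9,007,199,254,740,992
    var nsPerYear = 365.25 * 86400 * 1e9;  // ~3.16e16 nanoseconds in a year
    // nsPerYear already exceeds 2^53, so nanoseconds since the epoch
    // cannot be held exactly in a single double -- hence the proposed
    // [seconds, nanoseconds] pair.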

# Peter Hall (19 years ago)

We're talking here about a significant upheaval of how time is measured. I don't think this is necessarily a fair burden on existing implementations, when compared to the value of the feature. Though the cost is small, you are also potentially hurting performance in situations where many Date objects are created, because the nanoseconds would have to be calculated even if they are not needed.

I'd also be against returning a float from getTime(), since it contradicts es3 and breaks compatibility. Programs expecting an int could break if they get a float.

If nanosecond accuracy is desirable, perhaps we could circumvent the performance issue with a "useNanoseconds" flag, and the compatibility issue by adding a date.getNanoseconds(), analogous to the existing getMilliseconds(). Implementations would be permitted to return 0, if they choose not to implement it, or some lower resolution measurement, if they cannot measure to that accuracy.

You could then get the total nanoseconds since 1970, like this:

    // this flag can't go in the Date constructor because it is already overloaded enough...
    Date.useNanoseconds = true;
    var dt = new Date();
    nanoseconds = dt.getTime()*1000 + dt.getNanoseconds();

Peter

# P T Withington (19 years ago)

On 2006-06-28, at 12:16 EDT, Peter Hall wrote:

    // this flag can't go in the Date constructor because it is already overloaded enough...
    Date.useNanoseconds = true;
    var dt = new Date();
    nanoseconds = dt.getTime()*1000 + dt.getNanoseconds();

On 2006-06-28, at 12:12 EDT, Brendan Eich wrote:

  • A year from the epoch in nanoseconds requires more mantissa bits
    than are in IEEE754 double. Return a pair [seconds, nanoseconds]
    from a Date.nanotime() static method.

It seems contradictory to have access to a finer-grained clock be
more expensive. Since its purpose is accuracy, you don't want to
require allocation (and possibly a garbage-collection) to use it. It
is unlikely that you need the range of Date if you are concerned with
precision. Maybe the answer is to leave Date.prototype.getTime alone
and create a new interface with less range and more precision:

Date.tick: getter returning an integer that increases linearly with
elapsed time and is guaranteed to increase by at least 1 when
accessed successively. (I.e., can be used to measure the elapsed
time of a trivial expression.)

Date.tickScale: float that is the ratio of Date.tick units to
nanoseconds.
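
To illustrate the intent (Date.tick and Date.tickScale are hypothetical, as proposed above; the code assumes tickScale is the number of nanoseconds per tick unit):

    // Hypothetical usage of the proposed tick/tickScale interface.
    var t0 = Date.tick;                          // arbitrary-origin, monotonic counter
    trivialExpression();                         // placeholder for the code being timed
    var t1 = Date.tick;
    var elapsedNs = (t1 - t0) * Date.tickScale;  // ticks -> nanoseconds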

# John Cowan (19 years ago)

P T Withington scripsit:

Date.tick: getter returning an integer that increases linearly with
elapsed time and is guaranteed to increase by at least 1 when
accessed successively. (I.e., can be used to measure the elapsed
time of a trivial expression.)

Date.tickScale: float that is the ratio of Date.tick units to
nanoseconds.

This is problematic. In order to guarantee the given property of Date.tick (that successive calls return distinct values), the tick scale will have to be fine-grained enough that its results will readily overflow the integral range of an IEEE double.

Unless you mean that the first call to Date.tick in a given program may return 0, in which case there is no problem.

# Peter Hall (19 years ago)

It seems contradictory to have access to a finer-grained clock be more expensive. Since its purpose is accuracy, you don't want to require allocation (and possibly a garbage-collection) to use it. It is unlikely that you need the range of Date if you are concerned with precision. Maybe the answer is to leave Date.prototype.getTime alone and create a new interface with less range and more precision:

Agreed. As Brendan commented, a static nanotime() makes a lot more sense.

Peter

# Brendan Eich (19 years ago)

On Jun 28, 2006, at 9:47 AM, P T Withington wrote:

On 2006-06-28, at 12:16 EDT, Peter Hall wrote:

    // this flag can't go in the Date constructor because it is already overloaded enough...
    Date.useNanoseconds = true;
    var dt = new Date();
    nanoseconds = dt.getTime()*1000 + dt.getNanoseconds();

Forgot to comment on this bug: getTime() returns milliseconds since
epoch, not microseconds.

On 2006-06-28, at 12:12 EDT, Brendan Eich wrote:

  • A year from the epoch in nanoseconds requires more mantissa bits
    than are in IEEE754 double. Return a pair [seconds, nanoseconds]
    from a Date.nanotime() static method.

It seems contradictory to have access to a finer-grained clock be
more expensive. Since its purpose is accuracy, you don't want to
require allocation (and possibly a garbage-collection) to use it.

Precisely what I wrote several times in the wiki -- this led to a
"group assignment/return" proposal, which then was trumped by Lars's
destructuring assignment proposal. IIRC the discussion page for
destructuring recounts some ways to optimize array return to avoid
allocation, and we therefore agreed not to worry about the cost for
the purposes of nanotime. That does mean you either have to optimize
if your clock is going to be perturbed, or else have a less accurate
clock.

Just having a Date.nanotime() returning the current nanoseconds
within the current second is problematic because users would have to
call Date.now() separately and somehow cope with the latter falling
in a different second. So we think we want an atomic
nanoseconds-since-epoch primitive.
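
A sketch of how such an atomic pair might be consumed (Date.nanotime is the proposed, not-yet-specified static; nothing here exists today):

    // Hypothetical: Date.nanotime() returns [seconds, nanoseconds] sampled
    // atomically, so the two components cannot straddle a second boundary.
    var a = Date.nanotime();
    doStuff();                                   // placeholder
    var b = Date.nanotime();
    var elapsedNs = (b[0] - a[0]) * 1e9 + (b[1] - a[1]);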

It is unlikely that you need the range of Date if you are
concerned with precision. Maybe the answer is to leave
Date.prototype.getTime alone and create a new interface with less
range and more precision:

We absolutely must leave getTime alone; we are not breaking backward
compatibility except in a few selected ways that actually improve
the bugscape.

Date.tick: getter returning an integer that increases linearly
with elapsed time and is guaranteed to increase by at least 1 when
accessed successively. (I.e., can be used to measure the elapsed
time of a trivial expression.)

Date.tickScale: float that is the ratio of Date.tick units to
nanoseconds.

That's not bad, but why not just give 'em nanoseconds since the epoch
so they don't have to normalize.

# P T Withington (19 years ago)

On 2006-06-28, at 15:47 EDT, Brendan Eich wrote:

Date.tick: getter returning an integer that increases linearly
with elapsed time and is guaranteed to increase by at least 1 when
accessed successively. (I.e., can be used to measure the elapsed
time of a trivial expression.)

Date.tickScale: float that is the ratio of Date.tick units to
nanoseconds.

That's not bad, but why not just give 'em nanoseconds since the
epoch so they don't have to normalize.

a) For when nanoseconds is too coarse? tickScale could be to any
convenient unit, seconds would be fine too.

b) So the language implementor can choose a scale that fits their
implementation: fine enough to be able to time the simplest
expression accurately, but not so fine as to cause huge overhead in
the implementation.

I think the applications for Date and 'tick' are different. Date
needs to be relative to an epoch because people use it for absolute
times. 'tick' does not. It is for very accurate measurement of
intervals. It can start with an arbitrary value. It can wrap. From
the size of integers and the scale, you can determine the maximum
interval you can measure, use modular arithmetic, and use Date to
compensate if you really need that granularity for a longer interval
(I believe such applications will be rare).
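
As a sketch of the modular-arithmetic point (the wrap modulus and the tick values here are hypothetical):

    // Wrap-safe delta for a counter that wraps at TICK_MODULUS, valid as
    // long as the measured interval is shorter than one wrap period.
    var TICK_MODULUS = Math.pow(2, 32);
    function tickDelta(start, end) {
        return (end - start + TICK_MODULUS) % TICK_MODULUS;
    }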

I even think that having an accurate, fine-grained clock as a
primitive is important enough that I would give up synchronizing it
with Date for greater accuracy. If computing [ milliseconds,
nanoseconds ] takes more time than the expression I am trying to
measure, I have a problem.

[In case you haven't guessed, I am writing a Javascript profiler in
Javascript and am being thwarted by the lack of granularity in Date.
And, I didn't invent the tick/scale interface. I stole it from the
DEC Alpha which has a control register that increments at the CPU
clock frequency and a register that tells you what that frequency
is. This register is readable in user space, so makes a really fine
profiling instrument.]

# Brendan Eich (19 years ago)

On Jun 28, 2006, at 1:39 PM, P T Withington wrote:

On 2006-06-28, at 15:47 EDT, Brendan Eich wrote:

Date.tick: getter returning an integer that increases linearly
with elapsed time and is guaranteed to increase by at least 1
when accessed successively. (I.e., can be used to measure the
elapsed time of a trivial expression.)

Date.tickScale: float that is the ratio of Date.tick units to
nanoseconds.

That's not bad, but why not just give 'em nanoseconds since the
epoch so they don't have to normalize.

a) For when nanoseconds is too coarse? tickScale could be to any
convenient unit, seconds would be fine too.

People want finer granularity when benchmarking. In the interest of
future-proofing, we went to ns. To avoid worrying about wrapping
and races, we opted to make nanotime return the whole enchilada.

b) So the language implementor can choose a scale that fits their
implementation: fine enough to be able to time the simplest
expression accurately, but not so fine as to cause huge overhead in
the implementation.

This is a good point.

I think the applications for Date and 'tick' are different. Date
needs to be relative to an epoch because people use it for absolute
times. 'tick' does not.

Usually not, agreed.

It is for very accurate measurement of intervals. It can start
with an arbitrary value. It can wrap.

I continue to think that wrapping is problematic.

From the size of integers and the scale, you can determine the
maximum interval you can measure, use modular arithmetic, and use
Date to compensate if you really need that granularity for a longer
interval (I believe such applications will be rare).

There's a race if you don't have an atomic sample of the coarser and
the finer unit clocks.

I even think that having an accurate, fine-grained clock as a
primitive is important enough that I would give up synchronizing it
with Date for greater accuracy. If computing [ milliseconds,
nanoseconds ] takes more time than the expression I am trying to
measure, I have a problem.

Agreed already! ;-)

[In case you haven't guessed, I am writing a Javascript profiler in
Javascript and am being thwarted by the lack of granularity in
Date. And, I didn't invent the tick/scale interface. I stole it
from the DEC Alpha which has a control register that increments at
the CPU clock frequency and a register that tells you what that
frequency is. This register is readable in user space, so makes a
really fine profiling instrument.]

I thought this was familiar :-). I will take your proposal, noodle
on it, and wiki something to get TG1 buy-in. Thanks,

# Neil Mix (19 years ago)

On Jun 28, 2006, at 4:14 PM, Brendan Eich wrote:

From the size of integers and the scale, you can determine the
maximum interval you can measure, use modular arithmetic, and use
Date to compensate if you really need that granularity for a
longer interval (I believe such applications will be rare).

There's a race if you don't have an atomic sample of the coarser
and the finer unit clocks.

Maybe there could be a corresponding tickEpoch (a Date object) that
represents the time when Date.tick was set to 0?

# Gordon Smith (19 years ago)

Regarding the suggestion that an int be used to represent the number of milliseconds since 1970...

There have been approximately one trillion milliseconds since then, and an int can't store more than about 2 billion. Aren't milliseconds-since-the-epoch represented as a float not because this number can or should be fractional but because a float can store integers up to 2^52 (2^53?) or so rather than 2^31?
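
The numbers are easy to confirm (back-of-the-envelope only):

    // Milliseconds since 1970 is on the order of 1.15e12 (as of mid-2006),
    // far beyond the 32-bit signed-int limit of ~2.1e9, but comfortably
    // inside the 2^53 range where a double represents integers exactly.
    var msSinceEpoch = 1.15e12;            // roughly, mid-2006
    var int32Max = Math.pow(2, 31) - 1;    // 2,147,483,647
    var doubleExactMax = Math.pow(2, 53);  // 9,007,199,254,740,992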

# P T Withington (19 years ago)

On 2006-06-28, at 17:14 EDT, Brendan Eich wrote:

From the size of integers and the scale, you can determine the
maximum interval you can measure, use modular arithmetic, and use
Date to compensate if you really need that granularity for a
longer interval (I believe such applications will be rare).

There's a race if you don't have an atomic sample of the coarser
and the finer unit clocks.

Right. Well, if you give tick a decent range (make it a double
instead of an int), I think that issue is moot because you don't need
to measure really long events with that much precision. If I
calculated right, that would give you about 50 years to nanosecond
precision.

I thought this was familiar :-). I will take your proposal, noodle
on it, and wiki something to get TG1 buy-in. Thanks,

Thank you!

# Lars T Hansen (19 years ago)

Jeff Walden writes:

P T Withington wrote:

I am wondering if the new standard should encourage implementors to return a value with as much precision as the underlying hardware/OS permits?

I would assume this doesn't really need to be specified and will be assumed by implementers (at worst in response to user demands) as much as is practical, but then again, I've never spec'd or implemented a programming language.

The spec states clearly (15.9.1.1) that time is measured in milliseconds. Furthermore, new Date(v) performs TimeClip on v to transform v to milliseconds, and new Date(,,,,,,ms) converts ms to integer. Clearly, for all practical purposes no portable program can assume that time measured by the Date object is anything but an integer number of milliseconds.
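
For illustration, here is what the TimeClip step implies for a hypothetical sub-millisecond time value (TimeClip applies ToInteger, which discards any fraction, in a conforming ES3 implementation):

    // The fractional milliseconds are dropped by TimeClip at construction.
    var d = new Date(1151465324221.75);
    d.getTime();  // 1151465324221 -- the .75 ms is clipped away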

The question is perhaps whether a portable program can assume that time is always a number of milliseconds. This seems less clear, but it seems to be strongly implied.

That being so, an implementation choosing to support greater precision for time risks breaking portable programs since the external representation of a time value is not as expected, i.e., programs that expect integers in the external representation of time values may find non-integers.

Programs that only use Date to obtain timestamps that are always represented internally as numbers are probably not at much risk, though.

(Speaking as one of the implementors, I've never considered the option of representing time to a finer precision than milliseconds.)