This is a strange one. Ideally, I'd like a Sleep API call with better (faster) resolution than a millisecond. Many will probably ask "why", so I'll explain now.

I'm developing a DirectX9 application where I'm playing frames captured at about 250 frames per second (FPS). (It's actually 240, but the example math is easier if I say 250.) I'd like to play back those frames at something fairly close to that original capture frame rate. Yeah, I know that's faster than the monitor's refresh rate, but I don't care. If viewers don't actually "see" each frame, I don't care. I still want to "play" them all, and at the correct rate.

I can play the frames faster than 250 FPS with DirectX. In fact, I can get up to about 2500 FPS if I want (using my type of data and a fairly new computer with a decent GPU). I've also developed a simple timing class that uses QueryPerformanceCounter, so I've got plenty of resolution for timing things.

At 250 FPS, we'd like each frame to take 4ms (and hopefully that's obvious). So, the issue becomes: how do I slow things down? Overall, there are basically two ways to slow it down: 1) a timer, and 2) a loop. I'm using the loop approach; both approaches have problems.

Let me talk about the timer approach first. The best resolution we can get with a timer is 1 millisecond. So, let's assume that the timer fires and we're just under our desired elapsed time (for our FPS speed). For the sake of argument, let's say that 3.99ms had elapsed when this timer event fired. Therefore, we don't do anything on that timer event, and wait until the next one. That adds one millisecond to our time, so we're now at 4.99ms. A 4.99ms frame time works out to only about 200 FPS, which is quite a bit different from 250 FPS, so, not good. Therefore, to reduce the FPS error using the timer approach, I'd need a faster timer.

And that brings us to the alternative of a looping approach, and that's what I'm using: no timer, no sleep, just a tight loop until the correct amount of time has expired (via checking QueryPerformanceCounter). There is a DoEvents in the loop which allows the user to cancel if they so desire. However, as Olaf Schmidt has pointed out, a tight loop like this tends to exercise our CPU fans. In other words, it makes Windows think we're working very hard, when we're not really doing much of anything.

I thought about putting a Sleep call in the loop, thereby giving control back to Windows for periods of time (letting Windows know we're not really doing much), but that presents the exact same problem as using the Timer control: the best resolution I can get is 1 millisecond. And then, upon reading the MSDN documentation for Sleep, I noticed a statement confirming that same limitation.

That strikes me as a relatively naive, newbie type of approach to using a timer, which seems incongruous given your demonstrated experience with programming. My normal approach for establishing a desired frame rate on a less-than-reliable OS like Windows would be to establish the desired time at which the next frame should occur. Then, with each tick, I would compare how close I am to that time. If I am less than half a tick interval from the desired time, I execute the frame, then add my frame interval to my current desired time to get the next desired time. In your example, if the desired time was 4ms and the tick occurred at 3.99ms, the difference magnitude would be 0.01ms, well under half a tick interval, so the frame would execute on that tick rather than waiting until 4.99ms.

The most accurate timestamp we can get for Mac OS X is probably this: python3 -c 'import datetime; print(datetime.datetime.now().strftime("%s.%f"))'. But we need to keep in mind that it takes around 30 milliseconds to run. We can cut it to the scale of a two-digit fraction, and at the very beginning compute the average overhead of reading the time, and then remove it from the measurement.
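The tight-loop pacing described in the original post is VB6 calling QueryPerformanceCounter and DoEvents, and none of that code survives here. Purely as a minimal cross-platform sketch of the same busy-wait idea, here is the logic in Python, with `time.perf_counter()` standing in for QueryPerformanceCounter and the DoEvents call omitted:

```python
import time

FRAME_INTERVAL = 0.004  # 4 ms per frame, i.e. 250 FPS

def play_frames(num_frames, render_frame):
    """Busy-wait pacing: render, then spin until the frame's deadline.

    time.perf_counter() stands in for QueryPerformanceCounter; the
    original VB6 loop also calls DoEvents so the UI stays responsive.
    Deadlines are computed from the start time, so jitter on one
    frame does not accumulate into drift.
    """
    start = time.perf_counter()
    for i in range(num_frames):
        deadline = start + (i + 1) * FRAME_INTERVAL
        render_frame(i)
        while time.perf_counter() < deadline:
            pass  # tight loop: sub-millisecond accurate, but burns a CPU core
    return time.perf_counter() - start

# "Render" 50 empty frames; the total should land close to 0.2 s.
elapsed = play_frames(50, lambda i: None)
```

The drawback the post complains about is visible right in the `while` loop: between frames the process never yields, which is exactly what "exercises our CPU fans".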
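The first reply's scheme (compare each timer tick against an absolute desired time, fire the frame when within half a tick of it, then advance the desired time by one frame interval) can be sketched as follows. The tick source is simulated here, and the function name is mine, not from the thread:

```python
FRAME_INTERVAL = 4.0   # desired ms between frames (250 FPS)
TICK_INTERVAL = 1.0    # 1 ms timer resolution

def schedule_frames(ticks):
    """Decide on which timer ticks a frame should execute.

    ticks: elapsed-time readings (in ms) taken at each timer event.
    Returns the tick times at which frames fire. The desired time
    advances by FRAME_INTERVAL (not "now + interval"), so per-tick
    jitter never accumulates into long-term drift.
    """
    desired = FRAME_INTERVAL
    fired = []
    for now in ticks:
        if abs(now - desired) < TICK_INTERVAL / 2:
            fired.append(now)
            desired += FRAME_INTERVAL
    return fired

# The thread's example: a tick at 3.99 ms is only 0.01 ms from the
# 4 ms target, so the frame fires there instead of waiting to 4.99 ms.
ticks = [1.0, 2.0, 3.0, 3.99, 4.99, 6.0, 7.0, 8.02, 9.0]
print(schedule_frames(ticks))  # → [3.99, 8.02]
```

Note that after firing at 3.99ms the next target is 8.0ms (4.0 + 4.0), not 7.99ms, which is what keeps the average rate pinned at 250 FPS despite each frame being up to half a tick early or late.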
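The overhead-calibration idea in the Mac OS X reply (measure the average cost of reading the clock up front, then subtract it from later measurements) might be sketched like this; the function names and the sample workload are mine, not from the reply:

```python
import time

def clock_read_overhead(samples=1000):
    """Average cost in seconds of one time.time() call,
    estimated by timing a batch of back-to-back reads."""
    start = time.perf_counter()
    for _ in range(samples):
        time.time()
    return (time.perf_counter() - start) / samples

def timed(func, overhead):
    """Time func(), subtracting the pre-measured clock overhead."""
    t0 = time.time()
    func()
    raw = time.time() - t0
    return raw - overhead  # corrected duration in seconds

# Calibrate once at startup, then correct each measurement.
overhead = clock_read_overhead()
corrected = timed(lambda: sum(range(1000000)), overhead)
```

This is the same principle the reply applies to the 30ms cost of spawning python3: the fixed cost of taking a reading is measured once and removed from every subsequent result.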