It seems to me the user experience on high DPI displays like the Apple Retina display will be horrible in certain scenarios.
If I'm remoting into a Windows machine using Remote Desktop, or displaying an X application using X11 remoting, then the host DPI settings are probably going to result in a bad experience. I already see this when using my work laptop to remote into my desktop.
At work I have multiple 24" 1920x1080 monitors. When I RDP into the machine from my 14" 1920x1080 work laptop I get text in the code editor that is barely legible. I can't imagine how bad this would be on the Retina display!
Remote Desktop and other remote display clients should provide a pass-through mechanism for the client's DPI. That way the host can properly format the display for remoting.
To expand on this idea: if all size-related UI settings were stored in a device-independent unit (for example, twips, 1/1440 of an inch), then a translation could take place so the UI always matches its intended physical dimensions. You could even apply a per-device scaling factor. For example, I may not want my cell phone displaying at full physical dimensions; maybe 50% or 25% is okay and I know I'll have to squint at it, but I'd want my laptop or tablet to use 80-90% of physical dimensions, and so on.
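As a rough sketch of what that translation could look like: the conversion only needs the display's DPI and the per-device scaling factor. The helper below is purely illustrative (the function name and parameters are mine, not part of any existing API):

```cpp
#include <cstdio>

// Hypothetical helper: convert a device-independent size (twips,
// 1/1440 of an inch) to physical pixels for a given display.
// 'dpi' is the display's dots per inch; 'scale' is the per-device
// factor described above (1.0 = true physical size, 0.5 = half).
int TwipsToPixels(int twips, int dpi, double scale)
{
    const double inches = twips / 1440.0;           // twips -> inches
    return static_cast<int>(inches * dpi * scale + 0.5);
}

int main()
{
    // 10-point text is 200 twips (1 pt = 20 twips).
    // On a 220 DPI "Retina-class" panel at full physical size:
    printf("%d px\n", TwipsToPixels(200, 220, 1.0));  // ~31 px
    // The same text on a 440 DPI phone rendered at 50%:
    printf("%d px\n", TwipsToPixels(200, 440, 0.5));  // ~31 px
    return 0;
}
```

The point of the sketch is that the same 10-point setting renders at the same apparent size on both devices once the translation is applied, which is exactly what host-side rendering over RDP fails to do today.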
If no one has thought of this yet, well, it's here first... prior art recorded.
Monday, July 23, 2012
Thursday, February 23, 2012
Native or Managed?
I want to start a project with a friend to upgrade the UI of windbg. It's not too bad right now, but more importantly I think it could be so much more.
So here's the problem: making an "awesome" UI is much easier in C# than in native C++. However, the APIs for the debugging engine are all COM based and there are no official bindings for C#. There are a few unofficial bindings, but nothing really usable out of the box. So we'd probably have to roll our own bindings, which is not something I want to do.
I think a hybrid option would be to use C++/CLI. At least we'd be able to use the COM methods pretty easily and maybe even use the header files without any modification. We could also then use the managed API for the GUI. I think I'm going to suggest this as the route to take.
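As a rough idea of what that could look like, here's a minimal, untested C++/CLI sketch. DebugCreate, IDebugClient, and OpenDumpFile come from dbgeng.h; the DebugEngine wrapper class and its OpenDump method are just names I've made up for illustration:

```cpp
#include <windows.h>
#include <dbgeng.h>
#pragma comment(lib, "dbgeng.lib")

// Illustrative C++/CLI wrapper exposing the native debug engine to C#.
public ref class DebugEngine
{
public:
    DebugEngine()
    {
        IDebugClient* client = nullptr;
        // DebugCreate is the documented entry point into the engine.
        HRESULT hr = DebugCreate(__uuidof(IDebugClient),
                                 reinterpret_cast<void**>(&client));
        if (FAILED(hr))
            throw gcnew System::Runtime::InteropServices::COMException(
                "DebugCreate failed", hr);
        m_client = client;
    }

    ~DebugEngine() { this->!DebugEngine(); }
    !DebugEngine() { if (m_client) { m_client->Release(); m_client = nullptr; } }

    // Example of a method exposed to the managed GUI: open a crash dump.
    void OpenDump(System::String^ path)
    {
        using namespace System::Runtime::InteropServices;
        System::IntPtr p = Marshal::StringToHGlobalAnsi(path);
        HRESULT hr = m_client->OpenDumpFile(static_cast<PCSTR>(p.ToPointer()));
        Marshal::FreeHGlobal(p);
        if (FAILED(hr))
            throw gcnew COMException("OpenDumpFile failed", hr);
    }

private:
    IDebugClient* m_client;  // native COM interface, untouched by the GC
};
```

The native header is consumed as-is, the COM lifetime is handled in one place, and the C# side just sees an ordinary .NET class.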
There are some concerns that .NET wouldn't be installed in all the environments where you'd want to use the new UI, but I don't think that should stop us. It's pretty easy to install, and a version of .NET already ships with modern Windows releases.
A bit to think about, but I think the C++/CLI approach is going to be the way to go. It will get me writing code faster and have SOMETHING working much sooner than a native solution would. I've recently worked on native GDI+ code, and although it's nicer than GDI, it's still not something I'd want to force myself to use for a GUI-intensive project.
Monday, January 30, 2012
ERROR_WORKING_SET_QUOTA and IO Completion Ports
While working on a streaming engine I came across an interesting little hole in the MSDN documentation for Completion Ports. Completion Ports allow for extremely efficient throughput of data. The way this is accomplished is by queuing IO to a "Completion Port" and then associating the Completion Port with one or more threads.
The reason this is so fast is that Windows can then choose which thread will complete the IO operation. With a thread pool, Windows always picks the most recently executed thread (LIFO order), which greatly reduces TLB thrashing and the other costs of a CPU context switch. When the threads aren't processing IO they sit in a wait state, and the IO itself is processed in FIFO order.
The usual Completion Port architecture looks something like this:
- Create a Completion Port using CreateIoCompletionPort.
- Create the threads for the thread pool and call GetQueuedCompletionStatus to associate the threads with the Completion Port.
- Associate file HANDLEs (opened in Overlapped IO mode) to the Completion Port.
- Issue IO operations using ReadFile/WriteFile.
- Process the IO operations in the thread pool threads.
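Here's a bare-bones sketch of that setup. Error handling is trimmed and the file name is made up, but the CreateIoCompletionPort / GetQueuedCompletionStatus calls are the documented API:

```cpp
#include <windows.h>
#include <cstdio>

// Worker thread: block on the port and process completed IO.
DWORD WINAPI Worker(LPVOID param)
{
    HANDLE port = static_cast<HANDLE>(param);
    DWORD bytes;
    ULONG_PTR key;
    OVERLAPPED* ov;

    // Calling GetQueuedCompletionStatus associates this thread with
    // the port; Windows wakes waiting threads in LIFO order.
    while (GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE))
    {
        // ... process 'bytes' of completed IO for 'ov' here ...
        printf("completed %lu bytes\n", bytes);
    }
    return 0;
}

int main()
{
    // 1. Create the Completion Port.
    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);

    // 2. Create the thread pool.
    for (int i = 0; i < 4; ++i)
        CreateThread(nullptr, 0, Worker, port, 0, nullptr);

    // 3. Open a file for Overlapped IO and associate it with the port.
    HANDLE file = CreateFileA("data.bin", GENERIC_READ, 0, nullptr,
                              OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    CreateIoCompletionPort(file, port, /*CompletionKey=*/1, 0);

    // 4. Issue an IO operation; completion arrives on a pool thread.
    static char buf[4096];
    static OVERLAPPED ov = {};
    ReadFile(file, buf, sizeof(buf), nullptr, &ov);

    Sleep(1000);  // toy example: give the pool time to run
    return 0;
}
```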
When any IO operation completes, Windows will smartly choose a thread waiting on GetQueuedCompletionStatus to wake up and hand it the IO result. The call to GetQueuedCompletionStatus will return and data processing can begin. Ideally, an application would have only one Completion Port and perform all IO processing on this port/thread pool pair.
Everything about this is awesome, except... the documentation is really vague about how to handle ReadFile/WriteFile operations that return success immediately (and thus aren't queued). You need to make sure you call GetOverlappedResult (probably with the bWait parameter set to FALSE) or you will start getting strange errors.
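In code, the workaround looks roughly like this (my own sketch of the pattern described above, not something spelled out on MSDN):

```cpp
#include <windows.h>
#include <cstdio>

// Issue an overlapped read on a handle associated with a completion
// port, handling the immediate-success case described above.
bool IssueRead(HANDLE file, char* buf, DWORD len, OVERLAPPED* ov)
{
    if (ReadFile(file, buf, len, nullptr, ov))
    {
        // Completed immediately instead of pending: collect the
        // result now, with bWait = FALSE so we don't block.
        DWORD bytes = 0;
        GetOverlappedResult(file, ov, &bytes, FALSE);
        return true;
    }
    if (GetLastError() == ERROR_IO_PENDING)
        return true;  // normal case: completion arrives on the port

    // A real failure; this is where ERROR_WORKING_SET_QUOTA appeared
    // when the immediate completions weren't being handled.
    printf("ReadFile failed: %lu\n", GetLastError());
    return false;
}
```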
After a few of these immediate IO completions my streaming engine started hitting ReadFile failures described by "ERROR_WORKING_SET_QUOTA." Nowhere in the documentation for Completion Ports or GetOverlappedResult does it indicate this should be called in the Completion Port case. I suppose it's implied by the fact that you're using Overlapped IO, but still an explicit indication on MSDN would probably be useful.
This may be obvious to some but I wasted about an hour on this, so hopefully this post will shorten that time for someone else.