Google is building its own DeX: First look at Android's Desktop Mode
androidauthority.com | 448 points by logic_node a year ago
Taking better advantage of a display is nice but imo the really exciting part of desktop mode is the planned integration with Google's Linux Terminal app (i.e. 1st party linux VM support). I have a Samsung DeX device and while you can get a basic dev environment working easily it can be really cumbersome to make it comfortable to use and integrate with your normal tablet workflow. Being able to install full-fat linux apps and run them in a window would be a complete game changer.
source for planned integration: https://issuetracker.google.com/issues/392521081?utm_source=...
Chrome OS allowed this even before 2020, so you could open Linux apps (even GUI ones) and Android apps right next to each other... I had my whole JS dev workflow/toolchain running on that (didn't want to clog my main computer with it). The problem with mixing apps is that for some you have to use a mouse/stylus because their GUI was not meant to be touched.
It's a shame that Chrome OS was subsumed by Android instead of the other way around. IMO in many ways it had better foundations.
> IMO in many ways it had better foundations
Security-wise: True; but Android is a gigantic yet well-oiled ecosystem at this point, from silicon designers to manufacturers to vendors to developers, running on handhelds to TVs to wearables to gaming devices (including AR/VR consoles).
> shame that Chrome OS was subsumed by Android
ChromeOS had a decade, but Google is wise to focus on just one desktop platform. I don't think it should surprise anybody that a platform with 3bn users and 2mn-odd apps won out.
Using android on a laptop with a keyboard and mousepad was always an awkward experience. It's kind of like trying to use an iPad as your main computing device. Similarly bad experience.
Hopefully they work on that.
Similar with a keyboard and mouse on Android TV - I thought it would be useful for YouTube searches etc, but the UI is so ill-adapted to keyboard input that I gave up.
It's always funny charging my phone off the USB C for my monitor, nudging my mouse and seeing a pointer appear on the screen though.
Partially, it still has lots of issues that were never fixed.
https://www.chromium.org/chromium-os/chrome-os-systems-suppo...
https://www.chromium.org/chromium-os/developer-library/guide...
Note especially the parts about WIP, missing features, things yet to be done, and so on.
Dex is annoyingly close to being really useful.
I think Samsung recently added a "desktop Dex" mode that's supposed to be less mobile-ui. I haven't tried it tho.
> Dex is annoyingly close to being really useful.
I feel this a lot. I use it daily, mostly as a thin client for remote desktop use but there are little niggles that would make it better. Examples:
- Let me control how the top bar and taskbar are viewed
- Let games capture the mouse in remote desktop (for fps type games)
- Fix the small issues that cause the mouse capture to fail on steam link occasionally
- Fix rendering issues with firefox while in desktop mode
- Let the youtube UI work in a more "desktop" way while in dex mode
These might be mostly app responsibilities, but if they could fix some of this stuff dex would be a dream instead of just being mostly useful.
I just wish it would do 4K resolution out of the box.
The hardware can do it, it's just that the system settings won't show you the 4K resolution option for some reason. But you can do some hacks to make it appear and then it works just fine.
You need to install a nondescript app called 'Samsung Good Lock' from the Samsung store (not available in the Play store), and use that to side-load an app called 'MultiStar', which is an app to tweak display settings. From that side-loaded app you need to tap the 'I ♥ Samsung DeX' option, which makes various setting changes to "make DeX even more friendly"; it doesn't specify what it does exactly, but it'll make the 4K resolution option appear in the system settings.
This all feels real sketchy and I don't understand why Samsung doesn't just enable 4K resolution officially, because the hardware is clearly capable of it.
With every OneUI update there are rumors that it'll natively support 4K, but so far that hasn't happened AFAIK. Admittedly I haven't used Dex in a while for myself, but judging from recent Reddit posts this hack is still needed.
Samsung's Good Lock is kind of their testing ground for new features.
It lets people who want to tinker do it, while keeping people who probably shouldn't tinker from doing it.
It's not available in the Google Play store because the play store rules are really stupid. A lot of apps aren't available there.
According to the Samsung Store it is developed by developer 'Good Lock Labs'. According to this Wikipedia source [0] they developed this app 'in cooperation with Samsung'. Browsing through the sources I did find a 2016 article from Samsung themselves [1] about Good Lock, indeed confirming it is theirs.
Also, it looks like Good Lock is now also available on the Google Play Store, and there it lists Samsung Electronics as the developer [2].
I guess this does make it less sketchy of an app to use, but it still feels wrong to have to do so many weird steps to get a menu option working.
[0] https://en.wikipedia.org/wiki/Good_Lock [1] https://news.samsung.com/global/make-your-galaxy-smartphone-... [2] https://play.google.com/store/apps/details?id=com.samsung.an...
These instructions sound like a parody sketch about bad UX.
Aye, to some degree they are, but I'm also glad that android is open/hackable enough that goodlock lets you add these additional preferences. (I also use it, for me it was for ultrawide resolutions)
I remember when they presented the S10, with the initial implementation of Dex.
It felt so close already back then: sluggish, but still usable. But that initial implementation was running some in-house version of Ubuntu with a custom kernel (if I remember correctly).
I just wish this becomes a reality much sooner rather than later, especially if I can have my dev environment on some remote VPS with either tunneling, GitHub Codespaces, or Azure DevBox.
Just FYI, Dex is really fluid on flagship devices.
Reasonably fluid, but not when it comes to heavy web pages with a lot of 3D. I have an S24U I use in DeX for most of my day but when I do have to switch to my ten inch 6800u laptop it absolutely demolishes the DeX experience. There's still a fractional second of lag that Samsung hasn't done away with yet.
I think it was introduced with at least the S9+, mine has had DeX since I got it originally.
I have it on my old S8
This is the right answer. DeX itself was introduced with the S8 series.
With the S9 they introduced the developer test version of Linux on DeX but it never came to the S8 or S10 and it was already discontinued with the Android 10 update :(
It's not a full laptop replacement, but at least for me it's good enough at what it does that I can just take my phone or tablet with me on short vacations and not be paranoid that I'm gonna have to do something complicated like log into my bank or write some verbose emails that I'm normally afraid to do from my phone. In those instances, plugging one of them into a KVM and Dex mode is sufficient to get over the hump.
Last I used it, I still wouldn't want to write code on DeX. But it was great for everything else. I could definitely complete just about any other task I needed with it. It was a little clunky, but doable; Teams calls, getting into internal tools for triaging systems issues, the company CRM, all that stuff.
Rumor is Samsung won't support Google's Linux Terminal (at least for their existing phones) since their Knox conflicts with the Android Virtualization Framework :-(.
Honestly I'd like to see Windows 11 running under this as well, but that seems incredibly unlikely.
It's interesting to hear because Samsung had a Linux feature previously: https://developer.samsung.com/sdp/blog/en/2017/10/18/samsung...
They had Linux on DeX in 2018, killed in 2019. It was a partnership with Canonical
https://9to5google.com/2018/11/09/samsung-linux-on-dex-andro...
It was the Ubuntu 16.04 desktop running in an LXD container. It crashed when the tablet ran out of memory, so I had to be careful with what I was running.
Maybe it's possible anyways? Qualcomm was able to integrate their own hypervisor on top of AVF
Linux Plumbers Conference 2025 | Adding Third-Party Hypervisor to Android Virtualization Framework
https://lpc.events/event/17/contributions/1447/attachments/1... https://youtu.be/hLdUCrlheKg
When I tried the external display mode on my Pixel 8a, I did some development with a bluetooth keyboard, bluetooth trackball and vscode tunneling into my desktop.
So the development wasn't local, but it was sort-of usable. (And the editing is local in any case.)
What do you mean by tunneling here; remote desktop or does vscode run on the 8a?
VSCode runs in Chrome on the Pixel 8a. But it connects to a remote VSCode server via a VSCode tunnel where eg your compiler runs. See https://code.visualstudio.com/docs/remote/tunnels
I had no idea the vscode tunneling stuff worked in the browser. I thought it was SSH. Do they have some sort of WebSocket proxy? Do you have a link to how to set this up?
They offer both ssh and their own tunnel protocol.
So you start the 'server' on eg your desktop, and that registers with eg GitHub or Microsoft (or perhaps another service, not sure how open the system is), and then you can use any other computer to connect to your system via GitHub or Microsoft (as a proxy, I think). The other computer can either run just a browser, or can run a vscode (which is basically also a browser in the end).
See https://code.visualstudio.com/docs/remote/tunnels
Yes, the nice thing about the tunnels is that the computer you want to develop on doesn't have to be reachable from the internet. It only has to be able to reach the internet. GitHub (or Microsoft) play the man-in-the-middle.
It's really convenient. I often use it to develop from my laptop on my desktop, even when they are on the same local network: because it's basically just as fast, but I don't have to worry about which network I'm on, it just always works (as long as I have Internet access on both machines. But if that ever stops, I'm not really going to develop much anyway.)
I don't know. Google is always building lots of stuff and most of it gets shelved before it ever sees the light of day, and 75% of what does get released gets shuttered within 5 years.
The reality is if it isn't ads or ads adjacent, Google will lose interest. And based on their historical revenue I suppose they ought to continue with this model.
Google needs a widely used platform for AI integration into every computing task, based on interactions with and data on that device. Their best bet is to expand the reach of Android into traditional desktop tasks.
Android already made lots of progress on multi screens and adaptive layouts, and there is now a new developer center with guides for what they call productivity apps.
Not to mention, more people than we realize are on their phones. For those of us who use both a phone and computer, it is VERY easy to overlook.
For example, my wife is primarily on her phone as her computing device. Only recently, after buying a Mac Mini and a Cricut, is she back to using a standard computer. She might borrow my laptop for online shopping just so she can open 50 windows and 80 tabs to consume all available memory on my MacBook Air, but that's probably because Safari on iOS has sane tab caps.
I also know that games that were predominantly PC/web have become predominantly mobile over the years. There's a reason Roblox plays on your phone and tablet. You might not have the specs for a gaming machine, but your iPhone / iPad / Android definitely does.
that's not their best bet, their best bet is Gemini integration with all Google Workspace apps and Gemini eating Google search progressively
I feel you on what you're saying, but Google's Chromebook business is _big_ (11.5 Billion in revenue 2024) and this seems like a way to pull together that with their Android development.
I wish they'd open-source what they're shuttering. Would be a win-win as far as I can tell.
How is it a win for Google to release something open-source that had potentially cost them lots of money? Even if they don't need and pursue it anymore, why would they just give it to the competition? It's always easily said to "just open-source" it but Google is a business and owes outside software developers nothing.
How can another company compete with a product Google no longer offers? There is no competition because Google quit competing.
If Google spins up a project and then abandons it, how could they possibly be harmed by someone else offering a comparable product? Google has already accepted a total loss on the product, there's really nothing for them to lose here.
What benefit do they see in exchange for the effort in open sourcing things?
It's certainly a win for the rest of us, but how does Google benefit to make it a "win-win", and not just a "win"?
> What benefit do they see in exchange for the effort in open sourcing things?
Goodwill and more people willing to try whatever they release next, rather than the current situation of “Oh, Google is releasing a new thing? Pass. They’ll just stop supporting it and I’ll be left in the cold anyway, so no bother even trying”.
Killing so many projects makes fewer people interested in trying whatever they release next, which means fewer users, which means a higher likelihood it’ll be abandoned. It’s a vicious cycle that could be stopped or even reversed if they open-sourced their abandoned stuff.
To be clear, I'm not necessarily advocating Google should do it or that it'd be a clear win with no downsides. Maybe the upside wouldn't be worth it, but there is an upside.
I like and agree with your "open source as 'abandonment insurance'" angle here ...
> Goodwill and more people willing to try whatever they release next
When's the last time your (pick your favorite non-technical) relative cared if the product they were trying was open-source?
My point has nothing to do with licensing, but longevity.
What non-technical users know is “Google released a project, I invested my time in it, they abandoned it, and I was left hanging. This has happened multiple times so I no longer want to try anything new they release”.
Had the projects been open-sourced, at least some of them would have been picked up by others and continued so non-technical users would know “Google released a project, I invested my time in it, they abandoned it, then someone continued it and I’m still using it to this day. I’m happy to try this new Google thing, because even if they abandon it I won’t be left in the cold”.
> What benefit do they see in exchange for the effort in open sourcing things?
Next (good) thing they build will probably have greater adoption, due to less fear of "they'll kill this in two years anyway".
It's a win, because people will not fear Google shuttering their experiments, and thus will be more likely to use them. It's also a win, in that it furthers a common good: if Google abandons a venture, why would they be upset if someone picks it up and succeeds? It's also a win, in that it boosts the open-source community (or industry, whatever you want to call it), which is also a win-win. If you want to be cynical, it would also be a win in that you could spin a narrative about how Google's monopoly-fueled profits trickle down via open-source projects and thus unregulated capitalism works.
If they did, it would probably have to be rewritten, as it likely depends on a ton of internal Google systems.
You're right. I guess this illustrates a downside of closed-source and walled-gardens.
> The reality is if it isn't ads or ads adjacent, Google will lose interest.
Or unless it is a tool they need, like Gerrit.
If you haven't tried it, especially if your workplace allows your phone to have access to some corporate data, DeX + a good pair of AR or just integrated display glasses feels like the future.
I run my S23 Ultra with a pair of XReal Ones and a folding Bluetooth keyboard (DeX lets you use your phone as a touchpad). It is really amazing in widescreen mode sitting in a coffee shop, reading through technical documents and answering work email. When I'm done, it can all fold up and fit in a (spacious) pair of cargo shorts.
I think Samsung has played the long game on DeX, with an eye towards their collaborative XR glasses with Google next year. As great as XReal has been, I am eager to see a "first-party" solution.
I tried it for a while with the best AR glasses I could find at the time, XReal Air 2 Pros with an Xreal Beam, and although I could see the potential, it wasn't good enough to get work done. The screen size was too small, the resolution too poor, and it was a little too jittery and unnatural feeling.
Are the XReal Ones that much of a step forward that you can use them for serious work? Even on my Quest Pro I find it just on the edge of being too annoying to do coding work. Web browsing is decent.
And second question, worth buying the One or waiting for the One Pros?
The XReal One removed the biggest problems with that tech; it's usable now. No more "jittery and unnatural feeling" or stupid dongles/pucks. They put custom silicon in the glasses which stabilizes things and optionally locks displays in space.
It's not perfect but usable.
I'd say take another look. The Beam has a LOT of issues. The One basically says "give me a signal, I'll project it in 2D and track it with 3DoF." It's smooth, and while it can drift a little (it is only an accelerometer), it is stable for me.
I wear glasses with mine, yet I still find it surprisingly crisp for text in ultra-wide mode. I'd say it is a fairly unobtrusive experience. It also helps that the nose pads don't dig into my skin.
That said, if a Quest Pro isn't good enough, I hesitate to recommend it. The FOV is certainly smaller on the One.
Thanks. If you have experience with the Quest Pro would you say the text clarity is a step up with the Ones? Supposedly the One Pros will be even better, and are coming out soon.
Yep, I do this too. It works well. I rather would have a Linux Desktop but for now I can get all my work done like this.
> Linux Desktop
This is exactly what Librem 5 phone offers. (My daily driver.)
How's the battery life? Have you tried it with XR glasses?
Didn't try the glasses. Concerning the battery, see these:
https://forums.puri.sm/t/nine-months-librem-5-as-my-only-pho...
https://forums.puri.sm/t/a-l5-review-1-week-to-my-ready-to-s...
I'm extremely interested in this use case. I can imagine a future where your employer ships a "company headset" and peripherals rather than a laptop.
Why don't we have virtual offices to wander around yet?
> Why don't we have virtual offices to wander around yet?
I worked at a place that used one.
Because the actual functionality they provide is the same as Slack's, but worse in basically every way, is maybe why.
This is the problem. VR/AR can add value but you really have to tailor the experience to it. And it has to be a suitable usecase.
If you just lift over what you have in 2D it becomes only more painful. But this is what most people do. Also many platforms, like Microsoft Mesh. Yes, it's cool that you can join a teams meeting in VR. But until they add something that actually takes advantage of being in VR, all it does is add more friction. Roasting marshmallows and other cutesy minigames does not add any value whatsoever.
I think there’s maybe a case for VR meeting rooms that you kinda teleport into, but anything beyond that is gonna be niche as hell and just a hindrance in every other case. A whole VR office space? Just gets in the way.
And I expect even a VR meeting space would see more use in cases where it's worse than a normal video call but happens because someone in charge is over the moon for it, than in the far rarer cases where it's really better.
Well, I've done extensive trialling at work during the pandemic (when flying often just wasn't an option at all!) and I do see added value for things like workshops.
Teams has breakout rooms but they are very rigid. You have to switch to one and define them. You can't 'glance over' and see what the other rooms are doing. It's much more flexible to just walk around in a 3D space, work on a shared whiteboard you are standing around, pull in some powerpoints to discuss, and walk over to another group if you're needed (you could see them wave over). At this point it really becomes a real alternative to flying over for a workshop. Thus saving many tonnes of CO2, and much cost in flights and hotels. VR is not quite as good but it's much better at dynamic workshops than a simple video tool like Teams is. Added bonus if you are discussing potential upcoming products that you already have 3D models of. Just picking up a model and going like "Hey why don't we put the USB port on this side", this is really where this shines.
But the tool has to be really good. Other solutions like Arthur, Viverse and Spatial could do it really well (Spatial has since gone full consumer-oriented though and has lost many capabilities for business, it's now more of a luxury VRChat). Mesh can not, it is extremely limited. It's the old AltSpaceVR but dumbed down. It would have been better if they kept AltSpace as it was without messing so much with it.
Speed of light limitations: there is fundamental latency that will be noticed if you are not close enough. Many musicians are doing virtual jam sessions and 1000 km is about the limit. Music is the most demanding application; depending on how your meeting is run, a meeting can tolerate a lot more latency. Someone on Mars will forever be limited to just watching a presentation; someone on a different continent will need to raise their hand and be recognized before asking a question.
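For a sense of scale on that 1000 km figure, here's a back-of-envelope sketch of pure propagation delay. The ~2×10⁸ m/s fiber speed (light slowed by the glass's refractive index) is an assumption on my part, not from this thread:

```python
# Pure propagation latency, ignoring routing, queueing, and audio buffers.
# Assumed (not from the thread): light travels ~2.0e8 m/s in optical fiber.

FIBER_SPEED_M_PER_S = 2.0e8

def one_way_delay_ms(distance_km: float, speed_m_per_s: float = FIBER_SPEED_M_PER_S) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km * 1_000 / speed_m_per_s * 1_000

print(one_way_delay_ms(1000))    # 1000 km of fiber: 5.0 ms each way
print(one_way_delay_ms(10_000))  # intercontinental: 50.0 ms each way
```

At 1000 km the physics floor is only ~5 ms each way, so in practice the jam-session limit is mostly routing, buffering, and audio-stack latency stacked on top of that floor.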
That was what SimulaVR was advertising on. Unfortunately it seems things are a lot more difficult than they anticipated and they still have not shipped any devices.
Same with the Immersed Visor. Also still vaporware. They had lots of journalists fly over for a demo that didn't actually work and all they did was show off hardware.
What Bluetooth keyboard do you use? I'm thinking I want to try this out :)
If you want a really small one I've been happy with this:
https://www.amazon.com/iClever-Bluetooth-Keyboard-Foldable-S...
Wouldn't recommend for extended typing though.
Thanks. Do you only connect Dex + XReal + Keyboard with no mouse? I'm worried no mouse will be uncomfortable.
I actually don't have any sort of Dex/AR setup. Currently only have my phone's screen. Admittedly I've only tested it. Haven't actually done a coding session yet. So total typing time on that keyboard is minimal. So I guess all I can say is I'm happy with the build quality and design. The bluetooth switching between devices is pretty slick.
I tried this and battery goes down very quickly on the phone. Do you have a solution for this?
Any USB-C docking station should work ("should" being key; many are lacking something useful - commonly the monitor port is a USB-A video interface with Windows-only drivers, not a DisplayPort that would just work).
Though I suspect a laptop is still what you want. Your phone will generate too much heat to leave in your pocket. Or maybe some backpack (fanny pack?) wearable?
powerbank?
The issue with this is usually that you can't have power from the powerbank going into the only USB-C socket on the phone while the display signal comes out on the same cable. I think it's technically doable, but not usually with dongles that would fit in your pocket.
I meant for a pocketable setup as parent explained. You wear the glasses in a coffeeshop connected to your mobile phone (the single USB-C port).
Based on some quick testing this consumes about 1% per minute on my S24 Ultra which makes this scenario unrealistic (at least for me)
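To put that 1% per minute in perspective, here's a rough conversion to average power draw and runtime. The 5000 mAh capacity and 3.85 V nominal cell voltage are assumed typical values for an S24 Ultra, not measurements from this thread:

```python
# Rough power draw implied by a battery drain rate. Assumed values
# (not from the thread): 5000 mAh pack, 3.85 V nominal cell voltage.

def drain_to_watts(capacity_mah: float, voltage_v: float, pct_per_min: float) -> float:
    """Average power in watts implied by draining pct_per_min of the pack."""
    amp_hours_per_hour = capacity_mah / 1000 * (pct_per_min / 100) * 60
    return amp_hours_per_hour * voltage_v

def minutes_to_empty(pct_per_min: float, start_pct: float = 100.0) -> float:
    """Minutes until the battery is flat at a constant drain rate."""
    return start_pct / pct_per_min

print(drain_to_watts(5000, 3.85, 1.0))  # ~11.6 W sustained draw
print(minutes_to_empty(1.0))            # flat in 100 minutes
```

A sustained ~11-12 W is laptop-class power, which also explains the heat concerns mentioned elsewhere in the thread.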
I recently bought a second-hand Microsoft Surface tablet, installed Debian and now run GNOME on it. The first time it came up and I logged into a familiar GNOME environment was a profound experience. I was pretty sure what was going to happen, but it still took me by surprise.
So I don't think the convergence idea is necessarily bad. It's perhaps somewhat niche, and it's not easy to pull off.
I almost never use a phone, so for me the major selling point of my tablet is no Android oddities or second-rate-citizen vibes. I don't need to wade through an app store to do simple things. I'm not depending on a hardware vendor whose support stops a few years down the road. Plug in a keyboard and mouse, and it's just like any other computer with a really small screen. I already have a desktop computer, so it doesn't replace anything, but the familiarity is still nice.
The touch experience is not as polished as Android. It's fine for my purposes, though. I'm mostly using the tablet as a night-time reader for epubs - dark background, light level at minimum, and then it works surprisingly well for when I wake up and need something to do before I can fall asleep again.
Was this a Surface RT (very old ARM - Nvidia Tegra), Surface Pro (Intel), or Surface X (ARM - SQ1 / SQ2)?
This done well is a transformational thing; it's just that no one has been willing to invest yet. But the compute on a phone is now good enough to do most things most users do on desktop.
I can easily see the future of personal computing being a mobile device with peripherals that use its compute and cloud for anything serious - be that AirPods, glasses, watches, or just hooking that device up to a larger screen.
There's not a great reason for an individual to own processing power in a desktop, laptop, phone, and glasses when most are idle while using the others.
The future of personal computing is being dictated by the economics of it, which are that the optimal route to extract value from consumers is to have walled-garden software systems gated by per-month subscription access and/or massive forced advertising. This leads to everything being in the cloud and only fairly thin clients running on user hardware. That gives the most control to the system owners and the least control to the user.
Given that all the compute and all the data is on the cloud, there is little point in making ways for users to do clever interconnect things with their local devices.
I've heard so many "the future of personal computing" statements that haven't come true, so I don't put much stock in them.
I remember when everyone thought we were going to throw out our desktops and do all our work on phones and tablets! (Someone who kept insisting on this finally admitted that they couldn't do a spreadsheet on a phone or tablet.)
> Given that all the compute and all the data is on the cloud, there is little point in making ways for users to do clever interconnect things with their local devices.
IMO, it's a pain in the ass to manage multiple devices, so it's much easier to just plug my phone into a clamshell and have all my apps show up there.
> we were going to throw out our desktops and do all our work on phones and tablets! (Someone who kept insisting on this finally admitted that they couldn't do a spreadsheet on a phone or tablet.)
We're almost there. The cool kids are already using 12" touchscreen ARM devices that people from 10 or 20 years ago would probably think of as tablets. Some kinds of work benefit greatly from a keyboard, but that doesn't necessarily mean you want one all the time - I still think the future is either 360-fold laptops with a good tablet mode (indeed that's the present for me, my main machine is a HP Envy) or something like the MS Surface line with their detachable "keyboard cover".
> Some kinds of work benefit greatly from a keyboard, but that doesn't necessarily mean you want one all the time
I would say most kinds of work.
Even if you're just in Teams discussions - a real keyboard is much more productive than messing around on a touchscreen. Same with just reading. Sometimes I read a forum thread on my phone, and when I get back to the real computer I'm surprised how little I actually read compared to how much it felt like.
The only thing where I don't see this being the case is creative work like drawing where a tablet is really perfect, much better than a wacom or something.
Well, the MacBook Air is pretty much an iPad that swapped its touchscreen for a keyboard (and trackpad).
> I still think the future is either 360-fold laptops with a good tablet mode (indeed that's the present for me, my main machine is a HP Envy) or something like the MS Surface line with their detachable "keyboard cover".
I think people still want to use different form factors in the future. There's different uses for a phone, a tablet, a laptop and a desktop.
I do agree that laptops might get better tablet modes, but if you want to have a full-sized comfortable-ish keyboard, the laptop is gonna be more unwieldy than a dedicated tablet.
The only thing you save from running your desktop (or even laptop) form factor off your phone is the processor (CPU, GPU, RAM). You still have to pay for everything else. But even today the cost of desktop processing components that can reach phone-like performance is almost a rounding error; just because they have so much more space, cooling and power to play with.
(Desktop CPUs can be quite pricey if you buy higher-end ones, but they'll outclass phones by comical amounts. Phone performance is really, really cheap in a desktop.)
> I think people still want to use different form factors in the future. There's different uses for a phone, a tablet, a laptop and a desktop.
> The only thing you save from running your desktop (or even laptop) form factor off your phone is the processor (CPU, GPU, RAM). You still have to pay for everything else.
Having used the same device as my tablet/laptop/desktop for a few years (previously a couple of generations of Surface Book, now the Envy, in both cases with a dock set up on my desk), I never want to go back. It just makes using it so much smoother, even compared to having tab sync and what have you between multiple devices. It's not a money thing, it's a convenience thing, which is why I think it'll win out in the end.
I think as hardware continues to get thinner and lighter, the advantage of a tablet-only device compared to a tablet/laptop will disappear, and as touchscreens get cheaper, there'll be little point in laptop-only devices. I definitely still want an easy way to take a keyboard with my device on the train/plane, and I don't know what exact hardware arrangement will win out for that, but I'm confident that the convergence will happen. I think phone convergence will also happen eventually, for the same reason, but how that will actually work in terms of the physical form factor is anyone's guess.
> Having used the same device as my tablet/laptop/desktop for a few years (previously a couple of generations of Surface Book, now the Envy, in both cases with a dock set up on my desk), I never want to go back. It just makes using it so much smoother, even compared to having tab sync and what have you between multiple devices. It's not a money thing, it's a convenience thing, which is why I think it'll win out in the end.
Yes, that's useful. But eg ChromeOS already gives you most of that, and a bit of software could get you all the way there.
> I think as hardware continues to get thinner and lighter, the advantage of a tablet-only device compared to a tablet/laptop will disappear, and as touchscreens get cheaper, there'll be little point in laptop-only devices.
I agree with the latter, but not the former. There are mechanical limits to shrinking a keyboard while still preserving comfort.
(And once you have the extra space from a keyboard, you might as well fill it up with more battery. But I'm not so sure about that compared to the argument about physical lower bounds on keyboard size.)
> eg ChromeOS already gives you most of that, and a bit of software could get you all the way there.
I don't understand what you mean here. If you're talking about some kind of easy-sync-between-devices software, people have been trying to make that work for decades, but they not only haven't succeeded, they haven't even really made any progress.
> There are mechanical limits to shrinking a keyboard while still preserving comfort.
Maybe, but those limits are plenty big enough for a tablet - particularly with the size of phones these days, a tablet smaller than say 10" is pointless, and the keyboards on 11" laptops are fine. Now making a device that can work as both a phone and a laptop-with-keyboard will probably require some mechanical innovation, yes, but that's the sort of thing that I suspect will be figured out sooner or later, e.g. we're already seeing various types of folding phones going through the development process.
11" laptops are not fine to type on all day unless you give them huge bezels (even the 11" MacBook, which did have those huge bezels, was space-constrained on the less important keys). Ergonomics is really important.
Sure it's fine to get by for an hour or two but spending 8 hours 5 days a week on one is a really bad idea and will provide a great path to crippling RSI. In fact using any laptop that much is a bad idea, due to the bad posture it provides (with the screen attached to the keyboard). This is why docking stations are still so important.