Focus handling is the main thing that differentiates TV from mobile, since we don't have touch support on TV. In this article I present a comprehensive solution for focus management with Jetpack Compose on TV.
This is a nice experiment; however, I can see a lot of issues with this approach, as it is not tied to the system focus in Compose.
This means that when you move the focused index to a button, for example, the button's click handler does not work, because it does not have "real" system focus.
I am also not a fan of the fact that the viewmodel needs to know what the UI looks like. This makes it impossible to use the same viewmodel across different screens that are closely related to each other. What happens if you had a filter, for example, that should open the same screen with a filter bar? Now that filter bar cannot receive focus, because the viewmodel does not know about it, so the viewmodel has to know about all the different cases it could be used in.
It also makes it impossible to use this in a project with multiple app modules sharing common viewmodels, for example a project with a mobile app and a TV app, or even a Wear OS app.
So it is a nice try, but I do not think this approach, with its downsides, is any better than using the system focus in the somewhat quirky way it currently needs to be used.
Good thoughts.
The viewmodel does not necessarily have to be responsible for the focus calculations. This is what works for me; what works for you may be different. I think it's possible to extract everything related to focus into just another layer of the UI, and maybe even choose between the default and focus-as-a-state implementations depending on the circumstances.
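The extraction described above could start with a plain state holder that lives in the UI layer rather than in the viewmodel. Here is a minimal sketch in Kotlin; the names (`FocusListState`, `moveNext`, `movePrevious`) are hypothetical and not from the article, and a real implementation would still need to bridge this state to Compose's system focus (e.g. via `FocusRequester`) so click handlers keep working:

```kotlin
// Hypothetical focus-state holder for a vertical list of items.
// It knows nothing about business logic, so the viewmodel stays
// reusable across TV, mobile, and Wear OS screens.
class FocusListState(private val itemCount: Int) {
    var focusedIndex: Int = 0
        private set

    // Move focus down one item, clamping at the last index.
    fun moveNext() {
        if (focusedIndex < itemCount - 1) focusedIndex++
    }

    // Move focus up one item, clamping at index 0.
    fun movePrevious() {
        if (focusedIndex > 0) focusedIndex--
    }
}
```

A screen-level composable could own this holder and translate D-pad key events into `moveNext`/`movePrevious` calls, while the viewmodel only exposes the data being rendered.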
Some good ideas here! However I guess accessibility and TalkBack support would need to be added in somehow as well. And would this work if the user used a mouse as input and clicked around on the elements...?
Hi Oscar. This doesn't work with touch, D-pad only. Great observation regarding accessibility 👍