Can you tell us more about this UX necessity?

trevix wrote: ↑Mon Jan 30, 2023 4:51 pm
My standalone on mobile connects via sockets to a remote standalone (also on mobile).
Mobile A shows a screenshot (received from B) and should let the user perform a single click, a double-click, and a hold, each of which gets transmitted to mobile B.
This has to be done with a single button on mobile A, for several UX-related reasons.
Scripting tools like LiveCode do a great job of providing high-level access to the underlying OS routines that support common interaction models. But both a scripting language and the OS APIs it relies on can become very challenging when you attempt new interaction models.
In most UIs, we do sometimes see buttons behave differently in response to different input gestures, but I can't recall a case where a button changed its role.
For example, in the Finder an icon can be clicked once to select it, or double-clicked to open it. But it's worth noting that indicating a state change ("selected") isn't an action per se; it merely prepares the icon for an action invoked elsewhere (the Open item in the File menu). The double-click is a shortcut for this select-then-act interaction, but selection by itself invokes no action.
I took the time to go through my archives and found two posts to the Use-LiveCode list that go into this in more detail; both are included in this earlier reply:
https://forums.livecode.com/viewtopic.p ... 969#p43125
All that said, sometimes there is a true need to invent new interaction models.
If we can learn more about yours, we may be better empowered to find a with-the-grain solution to support it.
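In the meantime, in case a sketch helps the discussion: below is one minimal LiveCode-script approach for distinguishing all three gestures on a single button, using "send ... in <time>" to defer the single-click decision until the doubleClickInterval has elapsed, and a deferred check to detect a hold. The sendGesture handler and the 500-millisecond hold threshold are my assumptions, not anything from your project; you'd replace sendGesture with your actual socket write to mobile B and tune the timings to taste.

```
local sHoldFired

on mouseDown
   put false into sHoldFired
   -- hypothetical 500 ms hold threshold; tune for your UX
   send "checkHold" to me in 500 milliseconds
end mouseDown

on checkHold
   -- if the mouse is still down after the threshold, treat it as a hold
   if the mouse is down then
      put true into sHoldFired
      sendGesture "hold"
   end if
end checkHold

on mouseUp
   cancelPending "checkHold"
   if not sHoldFired then
      -- wait one doubleClickInterval before committing to a single click
      send "fireSingleClick" to me in (the doubleClickInterval) milliseconds
   end if
end mouseUp

on fireSingleClick
   sendGesture "click"
end fireSingleClick

on mouseDoubleUp
   -- a second click arrived in time, so suppress the pending single click
   cancelPending "fireSingleClick"
   sendGesture "doubleClick"
end mouseDoubleUp

private command cancelPending pHandlerName
   -- cancel any queued message with the given handler name
   repeat for each line tMsg in the pendingMessages
      if item 3 of tMsg is pHandlerName then cancel item 1 of tMsg
   end repeat
end cancelPending

command sendGesture pGesture
   -- placeholder: write pGesture to your socket connection to mobile B here
end sendGesture
```

The tradeoff with this pattern is that a single click is always delayed by one doubleClickInterval before it fires, which is unavoidable when one control must disambiguate click from double-click. Whether that latency is acceptable is exactly the kind of UX detail it would help to hear more about.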