GestureLaser and GestureLaser Car: Development of an embodied space to support remote instruction

dc.contributor.author: Yamazaki, Keiichi
dc.contributor.author: Yamazaki, Akiko
dc.contributor.author: Kuzuoka, Hideaki
dc.contributor.author: Oyama, Shinya
dc.contributor.author: Kato, Hiroshi
dc.contributor.author: Suzuki, Hideyuki
dc.contributor.author: Miki, Hiroyuki
dc.date.accessioned: 2017-04-15T11:51:10Z
dc.date.available: 2017-04-15T11:51:10Z
dc.date.issued: 1999
dc.description.abstract: When designing systems that support remote instruction on physical tasks in the real world, one must consider four requirements: 1) participants must be able to take appropriate positions; 2) they must be able to see and show gestures; 3) they must be able to organize the arrangement of bodies and tools and gestural expression sequentially and interactively; and 4) the instructor must be able to give instructions to more than one operator at a time. GestureLaser and GestureLaser Car are systems we have developed in an attempt to satisfy these requirements. GestureLaser is a remote-controlled laser pointer that allows an instructor to show gestural expressions referring to real-world objects from a distance. GestureLaser Car is a remote-controlled vehicle on which the GestureLaser can be mounted. Experiments with this combination indicate that it satisfies the four requirements reasonably well and can be used effectively to give remote instruction. Following a comparison of the GestureLaser system with existing systems, some implications for the design of embodied spaces are described.
dc.identifier.doi: 10.1007/0-306-47316-X_13
dc.identifier.isbn: 978-0-306-47316-6
dc.language.iso: en
dc.publisher: Kluwer Academic Publishers, Dordrecht, The Netherlands
dc.relation.ispartof: ECSCW 1999: Proceedings of the Sixth European Conference on Computer Supported Cooperative Work
dc.relation.ispartofseries: ECSCW
dc.title: GestureLaser and GestureLaser Car: Development of an embodied space to support remote instruction
dc.type: Text
gi.citation.endPage: 258
gi.citation.startPage: 239
gi.citations.count: 5
gi.citations.element: Takeshi Tsujimura, Yoshihiro Minato, Kiyotaka Izumi (2013): Shape recognition of laser beam trace for human–robot interface, In: Pattern Recognition Letters 15(34), doi:10.1016/j.patrec.2013.03.023
gi.citations.element: Ryotaro Kuriya, Takeshi Tsujimura, Kiyotaka Izumi (2015): Augmented reality robot navigation using infrared marker, In: 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), doi:10.1109/roman.2015.7333607
gi.citations.element: Nobuchika Sakata, Yuuki Takano, Shogo Nishida (2014): Remote Collaboration with Spatial AR Support, In: Lecture Notes in Computer Science, doi:10.1007/978-3-319-07230-2_15
gi.citations.element: Nobuchika Sakata, Tomoyuki Kobayashi, Shogo Nishida (2013): Communication Analysis of Remote Collaboration System with Arm Scaling Function, In: Lecture Notes in Computer Science, doi:10.1007/978-3-642-39330-3_40
gi.citations.element: Takeshi Tsujimura, Kiyotaka Izumi (2016): Active spatial interface projecting luminescent augmented reality marker, In: 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), doi:10.1109/mfi.2016.7849460
gi.conference.date: 12–16 September 1999
gi.conference.location: Copenhagen, Denmark
gi.conference.sessiontitle: Full Papers

Files

Original bundle

Name: 00141.pdf
Size: 1.1 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 0 B
Format: Item-specific license agreed upon to submission