Projects – Angelos Barmpoutis
https://abarmpou.github.io/angelos
Professor of Digital Arts and Sciences

Automated analysis of human motion using AI
https://abarmpou.github.io/angelos/page/automated-analysis-of-human-motion-using-ai/ (Feb 15, 2020)

Human movement has been studied in multiple disciplines, including the health sciences (biomechanics, kinesiology, neurology, sports medicine) and the Arts (theater and dance, cultural studies), as well as their intersection (arts in medicine, dance therapy), resulting in a large but disparate assortment of multi-modal datasets, including video, skeletal motion capture, manual annotations, and clinical metadata.

Traditional data collection processes often include Laban movement analysis, a standardized form of human movement annotation that parameterizes observed changes in motion within a pre-defined four-dimensional feature space (effort, space, shape, body timing/phrasing). Such analysis requires lengthy manual input from professionals, who annotate the recorded data through a time-consuming “watch and pause” process that is also prone to human error.

In this project, we use AI to fully automate the annotation process involved in Laban analysis by training a Bayesian-layered (in the temporal domain) Convolutional Neural Network (CNN) through supervised learning on existing human motion datasets of video and skeletal sequences.
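The project's network architecture is described in its publications; purely as an illustration of the data preparation such supervised training requires, the sketch below segments a skeletal motion-capture sequence into fixed-length temporal windows paired with labels. All names and parameters (window length, joint count, the LabanLabel categories) are illustrative assumptions, not the project's actual code.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch: windowing a skeletal sequence for supervised training. */
public class SkeletalWindowing {

    /** Hypothetical coarse labels for one temporal window (names are assumptions). */
    enum LabanLabel { EFFORT, SPACE, SHAPE, BODY }

    /** One training example: a window of frames plus its annotation. */
    record Example(float[][] frames, LabanLabel label) {}

    /**
     * Splits a sequence of skeletal frames (each frame: numJoints * 3 coordinates)
     * into overlapping fixed-length windows, each paired with the label of the
     * window's center frame, so a temporal classifier can be trained on them.
     */
    static List<Example> slidingWindows(float[][] sequence, LabanLabel[] perFrameLabels,
                                        int windowLen, int stride) {
        List<Example> examples = new ArrayList<>();
        for (int start = 0; start + windowLen <= sequence.length; start += stride) {
            float[][] window = new float[windowLen][];
            for (int t = 0; t < windowLen; t++) window[t] = sequence[start + t];
            examples.add(new Example(window, perFrameLabels[start + windowLen / 2]));
        }
        return examples;
    }

    public static void main(String[] args) {
        int numFrames = 300, numJoints = 25;          // e.g., 10 s at 30 fps, Kinect-like skeleton
        float[][] sequence = new float[numFrames][numJoints * 3];
        LabanLabel[] labels = new LabanLabel[numFrames];
        java.util.Arrays.fill(labels, LabanLabel.EFFORT); // placeholder annotations
        List<Example> data = slidingWindows(sequence, labels, 60, 30);
        System.out.println("Training windows: " + data.size()); // (300-60)/30 + 1 = 9
    }
}
```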

The trained model is being tested on Laban annotation of existing video and skeletal sequences and validated by arts-in-medicine practitioners and experts in Laban analysis. This AI-driven project will have a significant impact, as it will enable automated classification and understanding of human motion across a spectrum of movement-centered disciplines, including clinical and telehealth settings, orthopedic centers, choreographic practice, and cross-cultural movement analysis.

Passive Haptics in Virtual Reality
https://abarmpou.github.io/angelos/page/passive-haptics-in-virtual-reality/ (Feb 15, 2020)

We have been running various user studies at the UF Reality Lab that investigate the role of tangible objects in virtual reality. A series of publications co-authored by our graduate students in the Digital Arts & Sciences program has been assessing the influence of low-cost passive haptic interfaces on the user’s perception and overall experience within virtual environments.

How do passive haptics affect the user’s perception of physical properties, such as weight and size, in Virtual Reality? (HCII 2024)

How does interaction with physical objects within virtual environments affect knowledge acquisition and recall? (HCII 2024)

How do low-cost passive haptics affect the user experience while exercising? (HCII 2020)

Word Work Mat App for literacy instruction
https://abarmpou.github.io/angelos/page/word-work-mat-app-for-literacy-instruction/ (Jan 2, 2020)

During the COVID-19 pandemic, education has been severely impacted across the globe. Ongoing class sessions are especially important for K-12 students, many of whom benefit from experiential learning that is not typically offered in ad hoc online settings. UF Digital Worlds Institute professor Angelos Barmpoutis is working in partnership with the UF Literacy Institute (UFLI) to address this worldwide need with an innovative virtual platform solution.

The Virtual Word Work Mat (VWWM) is an interactive app designed for literacy instruction, based on the original physical classroom version of the Word Work Mat created by UF doctoral student Valentina Contesse. Valentina stated, “I never imagined when I was making Word Work Mats for my first graders, using file folders and Velcro, that it could be transformed into a digital tool that teachers all over the world would use in their classrooms!”

According to Facebook analytics, just two days after its release the VWWM had already reached more than 70,000 people, with more than 6,000 engagements and 300 shares, and UFLI’s online engagement increased by more than 1,000 subscriptions during the same period. Creating and offering ready access to the Virtual Word Work Mat during pandemic lockdown has empowered teachers and students to continue their literacy instruction as part of their online learning activities. Designed to work on tablets and other devices running either the iOS or Android platform, the VWWM provides a simple user interface in which students can manipulate letter and phoneme cards with intuitive touch gestures and compose words at home.
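The production app runs in the browser on touch devices; purely as a rough desktop analogue of the letter-card interaction described above, the sketch below implements draggable letter cards in plain Java Swing. It is a minimal illustration under that assumption, not the VWWM's actual code.

```java
import javax.swing.*;
import java.awt.*;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;

/** Minimal desktop analogue of draggable letter cards (not the actual VWWM code). */
public class LetterCardDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Letter cards");
            JPanel mat = new JPanel(null);               // absolute positioning for free dragging
            String letters = "cat";
            for (int i = 0; i < letters.length(); i++) {
                JLabel card = new JLabel(String.valueOf(letters.charAt(i)), SwingConstants.CENTER);
                card.setOpaque(true);
                card.setBackground(Color.WHITE);
                card.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY));
                card.setBounds(40 + 70 * i, 40, 60, 60);
                MouseAdapter drag = new MouseAdapter() {
                    Point grab;                            // point inside the card where the drag began
                    @Override public void mousePressed(MouseEvent e) { grab = e.getPoint(); }
                    @Override public void mouseDragged(MouseEvent e) {
                        // Move the card so the grabbed point follows the pointer.
                        Point p = SwingUtilities.convertPoint(card, e.getPoint(), mat);
                        card.setLocation(p.x - grab.x, p.y - grab.y);
                    }
                };
                card.addMouseListener(drag);
                card.addMouseMotionListener(drag);
                mat.add(card);
            }
            frame.add(mat);
            frame.setSize(400, 300);
            frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```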

Holly Lane, Director of UFLI, said, “We’re so excited about the partnership between the UF Literacy Institute and Digital Worlds in our response to the pandemic. Thanks to the commitment and technical expertise of Angelos Barmpoutis, we were able to take some of our interactive literacy instruction materials and bring them to life on a virtual platform. The response from teachers has been overwhelmingly positive. We have many thousands of teachers accessing the materials and sharing the links on social media.”

This excitement is shared by Digital Worlds Director James Oliverio: “One of the great benefits of experiential online learning is accessibility across the traditional challenges of demographics, geography, and time zones. This project is an example of the interdisciplinary strengths of the University of Florida: faculty stepping up in a time of need to provide tangible benefits from the ongoing research and development happening across our campus.”

UFLI Director Lane also stated, “Together, UFLI and DW are making a difference for teachers and their students. We hope this is just the beginning of our collaboration!”

Sample Comments from Parents and Teachers:

“I used this for the first time yesterday in my virtual reading lesson with my struggling readers. Their response was priceless. They were so engaged and actively asking me to change the letters and make new words.”

“This has been amazing for my daughter with a severe Auditory Processing Disorder and dysgraphia. Thank you!!!”

“This is amazing— thank you! Can’t wait to share with my teachers!”

“This is immeasurably helpful!!!!”

Press Release by the UF College of the Arts:

https://arts.ufl.edu/in-the-loop/news/digital-worlds-institute-researcher-creates-experiential-online-learning-app/

Links to the App:

Try the Virtual Word Work Mats (embedded as interactive drag-and-drop apps on the project page):

Beginner Word Work Mat (direct link on the project page)

Intermediate Word Work Mat (direct link on the project page)

]]>
Programming for Humanists
https://abarmpou.github.io/angelos/page/programming-for-humanists/ (Jun 26, 2019)

The purpose of this project is to introduce computer programming into humanities curricula by establishing a correspondence between natural languages and computer languages. The exercises discussed here show how you can transcribe into code: 1) a theatrical play in English (Shakespeare’s Romeo and Juliet), 2) the common notions in Ancient Greek from Book 1 of Euclid’s Elements, and 3) the discrepancy between the Julian and Gregorian calendars, computed from Pope Gregory XIII’s documents in Latin.
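As a worked illustration of the third exercise, the sketch below computes the century-level drift between the two calendars: the Julian calendar adds a leap day every four years, while the Gregorian reform skips it in century years not divisible by 400. This is the standard formula, not code from the project itself.

```java
/** Worked illustration: Julian-vs-Gregorian calendar drift (standard formula, not project code). */
public class CalendarDrift {

    /**
     * Days the Julian calendar runs behind the Gregorian calendar in the given year
     * (valid for years >= 200; the two calendars agreed during the 3rd century).
     * Each century year NOT divisible by 400 is a leap year in the Julian calendar
     * but not in the Gregorian one, adding one day of drift.
     */
    static int julianGregorianGap(int year) {
        int century = year / 100;
        return century - century / 4 - 2;
    }

    public static void main(String[] args) {
        System.out.println(julianGregorianGap(1582)); // 10 days, dropped by the 1582 reform
        System.out.println(julianGregorianGap(2024)); // 13 days today
    }
}
```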

The framework uses a tool that instantly renders computer code as natural language in English, Spanish (Español), German (Deutsch), Italian (Italiano), Greek (Ελληνικά), Turkish (Türkçe), etc., as well as in ancient languages such as Ancient Greek (Ἑλληνιστὶ) and Latin (Lingua Latina).
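The cited paper frames programming languages as token-level shortcuts for natural-language phrases; the sketch below illustrates that idea with hypothetical token tables that render a line of code as an English or Latin sentence. The mappings are invented for illustration and are not the project's actual dictionaries.

```java
import java.util.Map;

/** Illustrative token-replacement rendering of code as natural language (mappings are invented). */
public class TokenRendering {

    // Hypothetical dictionaries: each code token maps to a natural-language phrase.
    static final Map<String, String> ENGLISH = Map.of(
            "int", "a whole number", "=", "is set to", ";", ".");
    static final Map<String, String> LATIN = Map.of(
            "int", "numerus integer", "=", "fit", ";", ".");

    /** Replaces each whitespace-separated token with its phrase, leaving unknown tokens as-is. */
    static String render(String code, Map<String, String> dictionary) {
        StringBuilder out = new StringBuilder();
        for (String token : code.split("\\s+")) {
            out.append(dictionary.getOrDefault(token, token)).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        String code = "int x = 5 ;";
        System.out.println(render(code, ENGLISH)); // a whole number x is set to 5 .
        System.out.println(render(code, LATIN));   // numerus integer x fit 5 .
    }
}
```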

The scientific content of this presentation can be found in this article:
Barmpoutis, A., 2018. Learning Programming Languages as Shortcuts to Natural Language Token Replacements. Proceedings of the 18th Koli Calling International Conference on Computing Education Research, pp. 1-10. Download PDF

Imagineering the Technosphere
https://abarmpou.github.io/angelos/page/imagineering-the-technosphere/ (Aug 19, 2018)

Imagineering the Technosphere is a UF Intersections project funded by the Andrew W. Mellon Foundation.

In the face of our growing technological dependencies, our intersections group explores how humans use technology intentionally and unintentionally to alter our physical world. The group will study the accelerating pace of social technologies, such as the Internet and Artificial Intelligence.

The group aims to discover what the lessons of past inventions can teach us about how to address the problems facing humanity today, particularly as they emerge in the “technosphere,” the landscape shaped by human hands. The group will develop an interactive website, host a regular research workshop, and organize events with speakers and filmmakers about technoscience. To engage students with questions of space, place, and time, a faculty member will work with students to build a mobile app for time travel in augmented reality to examine the hidden role of technology on the UF campus. This work will lay the foundation for team-taught and other new courses that will give students the tools to envision how they will “imagineer” the future of the planet while harnessing the power of technologies in an environmentally and socially sustainable way.

In Spring 2020 our team will offer a course titled “Imagineering the Technosphere.” The purpose of this course is to respond to the grand challenge question “How do technologies influence our lives, then and now?” from the perspectives of our six thematic units: 1) Inventions and Sciences, 2) Spaces and Infrastructure, 3) Past and Future, 4) Imagining and Designing, 5) Conservation and Sustainability, and 6) Culture and Society. This interdisciplinary approach will equip students with foundational knowledge and tangible skills through weekly modules and experiential learning activities organized as part of the “UF Quest Game,” a gamified learning experience designed specially for this course. Students will be able to transcend the boundaries of traditional disciplines and demonstrate how the humanities serve as the foundation for understanding science and technology, and how this holistic approach can affect our decision-making processes, both within ourselves and on a planetary scale.

In the humanities class “Imagineering the Technosphere,” homework isn’t based on a book chapter but on an adventure through campus guided by the GPS-powered Time Traveler app.

Sisters Christine and Reyna Mae Cuales, both taking the class this semester, followed the prompts on the app, which steered them closer to their destination. So far, they’ve visited the Harn Museum of Art, the McKnight Brain Institute, the Baughman Center on Lake Alice and the Digital Worlds Institute in Norman Hall, among others. Today, they’re closing in on a location near the historic central campus. When students successfully navigate to the mystery location using the app, a screen pops up that tells them they’ve arrived, offers some background about the place, and poses a reflection question about the place and its use over time.
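The article does not describe the app's internals; a standard way to implement such an arrival check is a haversine-distance geofence around each target location, sketched below under that assumption. The coordinates and the 30 m radius are illustrative, not values from the app.

```java
/** Sketch of a GPS arrival check via haversine distance (an assumption about the app's logic). */
public class Geofence {

    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Great-circle distance in meters between two latitude/longitude points (degrees). */
    static double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    /** True once the student is within the given radius of the mystery location. */
    static boolean hasArrived(double lat, double lon, double targetLat, double targetLon,
                              double radiusMeters) {
        return haversineMeters(lat, lon, targetLat, targetLon) <= radiusMeters;
    }

    public static void main(String[] args) {
        // Illustrative coordinates near the UF campus; a ~30 m radius triggers the pop-up.
        System.out.println(hasArrived(29.6495, -82.3430, 29.6496, -82.3431, 30.0)); // true
    }
}
```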

Professor Angelos Barmpoutis says the intention of the app — and its corresponding board game — is to get students to see their surroundings in a new way.

“These places are deeply connected to the past and tied to the future,” he said. “I’m trying to get them to think about the things they pass every day.”

When students followed the app to the Norman Gym, for example, they saw a facility originally used for basketball, its wood floors still visible, hosting a weekend-long video game design competition. The experience gave them an opportunity to reflect on how not only the space but the nature of sports and competition evolved, Barmpoutis said.

Barmpoutis is one of seven professors who team-teach the class, tackling fields that range from anthropology to historic preservation. Each professor’s lesson includes the hunt for several of the game’s 22 3D-printed pieces, which students collect after finding the location and submitting video responses to the questions posed in the app.

Walking north on Buckman Drive, the Cuales sisters can see that they’re getting closer to today’s location. Then they cross the street, and the app tells them they’ve arrived. It’s Dauer Hall, a Collegiate Gothic brick building from 1936 with arches, bay windows, and stained glass that once served as the student union. They record their video response about connection and continuity in historic places, then check the app for their next destination.

Reyna Mae, a pre-health student, and Christine, who’s studying sustainability and the built environment, both say they have discovered academic interests they wouldn’t have known about without the course.

“When you’re just focused on your major, you don’t get to explore other classes,” Christine said. “Getting to know all of the professors and fields in this class opens up your eyes.”

3D Scanning the Rosetta Stone
https://abarmpou.github.io/angelos/page/3d-scanning-the-rosetta-stone/ (Jun 28, 2018)

An interactive 3D model of the Rosetta Stone is embedded on the project page; use 1-finger and 2-finger gestures to move, rotate, and zoom it.

With the permission of the British Museum, an interdisciplinary team from the University of Florida and the University of Leipzig scanned the Rosetta Stone in June 2018 to generate a high-resolution 2D and 3D map of its inscribed surface. In our setup, we used a single DSLR camera (Nikon D3400), fixed on a tripod in front of the stone and configured as follows: exposure time = 5 sec, ISO speed = ISO-100, F-stop = f/25, focal length = 135 mm, and max aperture = 4.5. To reconstruct the tridimensional inscribed surface using the shape-from-shading method, we controlled the lighting of the stone using a handheld light wand (Ice Light), which served as a 15-inch-long light source of 1600 lumens at 5600 K color temperature.

We divided the artifact into 8 regions (4 rows and 2 columns), which were photographed individually at 6000 x 4000 pixel resolution. Each region was photographed under 4 different lighting directions (light from the left, top, right, and bottom) by placing the light wand on the corresponding side of the region of interest. This quadri-directional lighting configuration allowed us to capture information related to the local orientation at each point of the surface through the differences in light reflection observed across the corresponding four photographs. The entire scanning session, including opening the glass case of the artifact, setting up the equipment, digitizing the artifact, and restoring everything to its original configuration before the opening of the museum, took 120 minutes.

During this time 32 photographs were taken in total (8 regions x 4 lighting conditions), which were then processed to compose high-resolution 2D and 3D representations of the surface with a sampling interval of 0.08141 mm, equivalent to 312 DPI. The tridimensional details of the inscribed surface were captured in the depth map, which was computed by processing the four images of the same region of interest, illuminated from four different lighting orientations, using the method by A. Barmpoutis, E. Bozia, and R. Wagman published in Machine Vision and Applications 21(6) in 2010. The depth map contains detailed three-dimensional information about the inscribed surface so that it can be visualized in 3D. The 3D reconstructed surface can be rendered as an interactive 3D model that can be manipulated by the user (move, scale, rotate) and inspected under different virtual lighting orientations and shading methods.
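The cited 2010 paper handles the full reconstruction; purely as a simplified baseline of the same idea, the sketch below recovers a per-pixel surface normal from four differently lit intensities by least squares (classical four-light photometric stereo). The light-direction vectors are illustrative assumptions, not the project's calibration.

```java
/**
 * Simplified four-light photometric stereo (classical baseline, not the project's pipeline):
 * under Lambertian reflectance, intensity I_k = albedo * (light_k . normal), so four images
 * under known light directions give a least-squares estimate of the normal at each pixel.
 */
public class PhotometricStereo {

    // Assumed unit-length light directions for left/top/right/bottom wand placement (illustrative).
    static final double[][] L = {
            {-0.7, 0.0, 0.7}, {0.0, 0.7, 0.7}, {0.7, 0.0, 0.7}, {0.0, -0.7, 0.7}};

    /** Least-squares normal from four intensities: n ~ (L^T L)^{-1} L^T i, then normalized. */
    static double[] estimateNormal(double[] intensities) {
        double[][] ltl = new double[3][3];
        double[] lti = new double[3];
        for (int k = 0; k < 4; k++) {
            for (int r = 0; r < 3; r++) {
                lti[r] += L[k][r] * intensities[k];
                for (int c = 0; c < 3; c++) ltl[r][c] += L[k][r] * L[k][c];
            }
        }
        double[] n = solve3x3(ltl, lti);
        double len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        return new double[]{n[0] / len, n[1] / len, n[2] / len};
    }

    /** Solves a 3x3 linear system by Cramer's rule. */
    static double[] solve3x3(double[][] a, double[] b) {
        double det = det3(a);
        double[] x = new double[3];
        for (int col = 0; col < 3; col++) {
            double[][] m = {a[0].clone(), a[1].clone(), a[2].clone()};
            for (int r = 0; r < 3; r++) m[r][col] = b[r];
            x[col] = det3(m) / det;
        }
        return x;
    }

    static double det3(double[][] m) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    }

    public static void main(String[] args) {
        // Equal brightness under all four lights implies a normal facing the camera: (0, 0, 1).
        System.out.println(java.util.Arrays.toString(estimateNormal(new double[]{1, 1, 1, 1})));
        // Note: the 312 DPI figure follows from the sample spacing: 25.4 / 0.08141 ~ 312.
    }
}
```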

Finally, in addition to the 3D reconstruction of the inscribed surface, we used a hand-held laser scanner (Structure Sensor by Occipital) mounted on a tablet computer (iPad Air by Apple) to create a 3D model of the entire stone. Although the 3D model generated by this scanner depicts the overall shape of the entire artifact, it does not have enough resolution to capture the fine details of the inscribed surface. Therefore, the surface reconstructed with shape-from-shading is complementary to the laser-scanned 3D model; the two forms co-exist, depicting different structural details of the artifact.

The result of this process is a high-resolution 3D representation of the Rosetta Stone that is available online as an interactive web app and can be accessed through the project’s website.

In November 2019, the project was featured in German news outlets: https://www.mdr.de/wissen/stein-von-rosette-digital-leipzig-100.html

Java For Kinect (J4K)
https://abarmpou.github.io/angelos/page/java-for-kinect-j4k/ (Dec 20, 2013)

The J4K library is a popular open-source Java library that implements a Java binding for Microsoft’s Kinect SDK. It communicates, via the Java Native Interface (JNI), with a native Windows library that handles the depth, color, infrared, and skeleton streams of the Kinect.

The J4K library is compatible with all Kinect devices (Kinect for Windows, Kinect for XBOX, new Kinect, or Kinect 2) and allows you to control multiple sensors of any type from a single application, as long as your system capabilities permit. For example, you can control three Kinect 1 sensors, or one Kinect 1 and one Kinect 2 connected via USB 3.0, on the same computer. Furthermore, the J4K library contains several convenient Java classes that convert the packed depth frames, skeleton frames, and color frames received from a Kinect sensor into easy-to-use Java objects.
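A minimal usage sketch follows, based on the library's documented pattern of subclassing J4KSDK and overriding frame-event callbacks. The method signatures below are recalled from the library's examples and should be verified against the current J4K Javadoc before use.

```java
import edu.ufl.digitalworlds.j4k.J4KSDK;

/**
 * Minimal J4K usage sketch: subclass J4KSDK and override the frame-event callbacks.
 * Signatures follow the library's documented examples; verify against the current Javadoc.
 */
public class KinectListener extends J4KSDK {

    @Override
    public void onDepthFrameEvent(short[] depth, byte[] playerIndex, float[] xyz, float[] uv) {
        // Called for each depth frame; depth holds packed per-pixel depth values.
        System.out.println("Depth frame: " + depth.length + " pixels");
    }

    @Override
    public void onSkeletonFrameEvent(boolean[] flags, float[] positions,
                                     float[] orientations, byte[] state) {
        // Called for each skeleton frame; positions holds packed joint coordinates.
    }

    @Override
    public void onColorFrameEvent(byte[] data) {
        // Called for each color frame; data holds the packed color image.
    }

    public static void main(String[] args) {
        KinectListener kinect = new KinectListener();
        // Request the streams to receive; stream flags can be combined with bitwise OR.
        kinect.start(J4KSDK.DEPTH | J4KSDK.COLOR | J4KSDK.SKELETON);
    }
}
```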

Digital Epigraphy and Archaeology Project
https://abarmpou.github.io/angelos/page/digital-epigraphy-and-archaeology-project/ (Feb 28, 2013)

DEA is an interdisciplinary project initiated by scientists from the Digital Worlds Institute and the Department of Classics at the University of Florida. The goal of the project is to develop new open-access scientific tools for the Humanities and to apply concepts from digital and interactive media and computer science to Archaeology and Classics. On our website you can view our 3D collections, interact with our online exhibits, read about our recent results, find interactive demos of our projects, and learn more about our future research directions.

Bringing together Digital Media, Computer Science, and the Humanities.

Exciting Kids to Walk
https://abarmpou.github.io/angelos/page/exciting-kids-to-walk/ (Mar 11, 2012)

The UF Clinical and Translational Science Institute Funds Interactive Rehabilitation Environment

The UF Clinical and Translational Science Institute has awarded a $7,500 grant to support a pilot project promoting walking recovery and enhancing sensory input in kids with spinal cord injuries. Digital Worlds professor Angelos Barmpoutis is participating as a co-investigator in this project, along with professors from the UF departments of Neuroscience and Physical Therapy. The goal of this collaborative team, led by Dr. Emily J. Fox, is the development of a game-type environment that motivates disabled kids to walk.

The incorporation of interactive games and virtual reality (VR) is an innovative approach to making rehabilitation more engaging. Game technology motivates children and promotes the practice and performance of specific motor skills. Although games have demonstrated therapeutic effects when applied to children with neurological injuries, most games are not designed with consideration for motor impairments or for use in the locomotor training (LT) environment. Therefore, the long-term objective of this project is to develop interactive gaming technology for the advancement of locomotor training interventions for children with neurological injuries.

This project is the first phase in meeting this objective and will result in the development of a technical game prototype. A collaborative multidisciplinary team has been formed with expertise in basic neuroscience, rehabilitation, computer science, and game development. Focus groups of 8 children with spinal cord injury (SCI) and cerebral palsy (CP), along with their caregivers and clinicians (physical therapists and physiatrists), will be formed, and feedback from these groups will be incorporated into the team’s game-design document. Using an iterative game development approach, a game software prototype will be developed and optimized for use with LT. Recently released PrimeSense™ technology, which allows for interactive controller-free play, will be used and interfaced with the game prototype.

Development of this game prototype and pilot data from its use will lead to a competitive NIH grant application. Moreover, the combined application of basic science, rehabilitation, and game technology has a high likelihood of enhancing walking rehabilitation approaches for children with neurological injuries.
