Tokyo Institute of Technology, School of Environment and Society
Department of Transdisciplinary Science and Engineering, Cross Lab
Meet The Team
EdTech Group at the Cross Lab
Students in the Educational Technology (EdTech) group at the Cross Lab currently work on a variety of topics: metacognition, virtual reality (VR) assisted language learning, AI-assisted writing, personalized online learning platform development, programming education for elementary school students, lifelong learning, and conversational chatbots in education. We are interested in understanding how educational technology, combined with artificial intelligence in particular, can improve student learning, and in assessing the effectiveness of these new methods.
Luc Gougeon (LUC)
Luc is a third-year doctoral student from Canada who balances his studies with full-time work. His research focuses on educational policy and computational thinking. He is interested in understanding whether Japanese in-service teachers are ready to teach programming in 2020. Luc has lived in Japan since 2008 and has worked as a university lecturer since 2015. He uses technology in his classes on a daily basis and hopes that his research will help him train the next generation of educators.
Promoting University Students and Elementary School Teachers to Become Lifelong Learners Through Play
In 2020, Japanese primary school educators will face the difficult challenge of introducing programming in their classes despite never having studied programming themselves. Our research aims to map the specific contours of this knowledge gap among in-service teachers and to extend the survey to current university students, who often lack computer literacy skills as well. Most research in the field of computer literacy places a strong emphasis on children while neglecting the needs of in-service educators and older students. We will tackle this question by surveying a range of students and teachers and by conducting case studies built around an educational intervention designed to give university students a quick grasp of computational thinking, computer literacy, and basic programming concepts. The case-study approach is intended to offer students essential skills in an active learning environment, skills that will transfer to their future workplaces, or to their classrooms if they intend to become educators. The results of this study should offer stakeholders and policy-makers a clearer picture of the current educational landscape and inform their decisions. Below is an illustration summarizing the issues to be investigated, relating educational approaches to students’ knowledge needs.
Robert Anthony Olexa (Tony)
Robert Anthony Olexa is conducting research on Japanese students studying English as a foreign language (EFL) in tertiary educational settings, funded by a JSPS Kakenhi grant. The research focuses on how students use iconic gestures and embodied communication to acquire English in virtual environments. An ongoing Virtual Reality (VR) Chat language-learner corpus, cross-referenced with video data and multimodal analysis, is used to observe how embodied learning contributes to students’ EFL learning progress.
Embodiment and Iconicity for English as a Foreign Language Learning in Virtual Reality
Iconicity is a term used to describe communicative elements that closely resemble their referents. A degree of iconicity in communication between caregiver and learner has been recognized as necessary for first language acquisition. Educators have also long intuited the usefulness of iconic gestures for second language acquisition, as evidenced by the broader educational approach of “Active Learning” and by more concentrated EFL approaches such as Total Physical Response. However, these approaches have known limitations, and the Japanese EFL setting remains situated in the classroom. At present, the learning experience is delivered mainly through passive activities.
Recent advancements in commercial VR technology allow for six degrees of freedom (6DoF) of movement (see below). Participants can move around in virtual environments with increased space and freedom of movement, allowing for embodied communication and iconic gestures. This liberation from the traditional classroom environment could improve EFL teaching and learning in Japan as a whole. The findings may also point to areas of improvement for software developers and designers of extended reality devices.
Do Tien Dung (Bryan)
GSEP B4 student
Automated Essay Grading to assess Comprehension in Mechanical Engineering Students' Constructive Responses with BERT Pretrained Model and Annotation Techniques
Machine learning has recently garnered significant attention for its ability to process tasks efficiently and, in some cases, surpass human performance. This has led to increased interest in applying state-of-the-art machine learning models in education. With machine learning assistance, teachers may be able to reduce their workload and focus on creating a more conducive learning environment for students. While automated essay grading (AEG) has been studied extensively, most of these studies have evaluated general English essays rather than concentrating on a particular field. This study aims to develop an automated essay grading model specifically tailored to evaluate Constructive Response Questions in the field of mechanical engineering. A dummy set of answers will be manually generated by paraphrasing, reordering, and removing some text from sample answers, following the scoring guidance. Finally, a Neural Network Grading Engine (grading model) will be built and trained to assess comprehension, taking advantage of BERT, BERT-variant models, and annotation techniques.
Note: This research is supported by Japan’s Tuning Test National Center.
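The grading pipeline described above can be sketched as follows. This is a minimal illustration, not the project's actual model: the `embed` function below is a simple bag-of-words stand-in for a BERT sentence embedding, and the reference answers are invented examples standing in for the manually generated dummy answer set.

```python
from collections import Counter
import math

def embed(text):
    # Stand-in for a BERT sentence embedding: a bag-of-words vector.
    # In the actual system this would be replaced by pooled hidden
    # states from a pretrained BERT (or BERT-variant) model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(response, scored_references):
    # Assign the score of the most similar reference answer.
    # scored_references: list of (reference_text, score) pairs, e.g.
    # built from the paraphrased/reordered/truncated dummy answers.
    best_score, best_sim = None, -1.0
    for ref, score in scored_references:
        sim = cosine(embed(response), embed(ref))
        if sim > best_sim:
            best_sim, best_score = sim, score
    return best_score, best_sim

# Hypothetical scored references for a mechanical engineering question.
references = [
    ("stress equals force divided by cross sectional area", 5),
    ("stress is when the material feels pressure", 2),
]
score, sim = grade("stress is force over the cross sectional area", references)
```

A trained neural grading engine would replace the nearest-reference lookup with a learned mapping from embeddings to scores, but the similarity-based sketch shows where the dummy answer set fits into the pipeline.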
YSEP international exchange student (B3)
Interleaved Spaced Repetition System with Sign Language Recognition Through Computer Vision in Sign Language Learning
This research aims to create an effective tool for English speakers to learn American Sign Language (ASL) by taking advantage of computer vision, which remains underutilized in EdTech tools. Sign language research has made many strides both in recognition through computer vision and in improving EdTech tools. Open-source models range from those recognizing the fingerspelled alphabet to more complicated ones that can recognize a wide range of sentences from video input. While some apps have already incorporated simple sign language recognition models, there is still a gap in using these models to build more complex but useful language learning tools. Current language learning tools that use a Spaced Repetition System (SRS) for sign language learning do not use any form of computer vision. By combining the two, the application will answer the question of how effective computer vision with SRS is for sign language acquisition.
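One way the two components could meet is at the scheduling step: the recognition model judges the learner's sign, and its output drives a spaced-repetition update. The sketch below assumes an SM-2-style scheduler (the algorithm behind many SRS tools) and a hypothetical recognition result consisting of a recognized/not-recognized flag plus a confidence value; the actual application may combine them differently.

```python
def next_review(interval_days, ease, recognized, confidence):
    # One SM-2-style spaced-repetition update for a sign-language card.
    # The quality grade (0-5) is derived from the hypothetical
    # computer-vision output: whether the learner's sign was recognized,
    # and with what confidence (0.0-1.0).
    quality = 0 if not recognized else round(2 + 3 * confidence)
    if quality < 3:
        # Failed recall: reset to a one-day interval, ease unchanged.
        return 1, ease
    # Standard SM-2 ease-factor update, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days == 0:
        interval_days = 1       # first successful review
    elif interval_days == 1:
        interval_days = 6       # second successful review
    else:
        interval_days = round(interval_days * ease)
    return interval_days, ease

# A new card signed correctly with high recognition confidence:
interval, ease = next_review(0, 2.5, True, 0.95)
```

Interleaving would then be handled by the queue that picks which due card to show next, mixing signs from different lessons rather than drilling one set at a time.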