Artificial Intelligence (AI) research has broad applications to real-world problems. Examples include control, planning and scheduling, pattern recognition, knowledge mining, software applications, strategy games, and others. The ever-evolving needs of society and business, both on a local and on a global scale, demand better technologies for solving increasingly complex problems. Such needs can be found in all industrial sectors and in every part of the world.
The Multi-disciplinary International Conference on Artificial Intelligence (MIWAI), formerly called the Multi-disciplinary International Workshop on Artificial Intelligence, is a well-established scientific venue in the field of artificial intelligence. MIWAI was established more than 16 years ago. The conference aims to be a meeting place where excellence in AI research meets the need to solve dynamic and complex problems in the real world. Academic researchers, developers, and industrial practitioners will have extensive opportunities to present their original work, technological advances, and practical problems. Participants can learn from each other and exchange their experiences in order to fine-tune their activities and better support one another. The main purposes of the MIWAI series of conferences are:
Artificial intelligence is a broad area of research. We encourage researchers to submit papers in, but not limited to, the following areas:
Submission link: https://www.easychair.org/conferences/?conf=miwai2024
Both research and application papers are solicited. All submitted papers will be carefully peer-reviewed on the basis of technical quality, relevance, significance, and clarity.
Each paper should be no more than twelve (12) pages in the Springer-Verlag LNCS style. The authors' names and institutions should not appear in the paper, and the authors' own unpublished work should not be cited. Springer-Verlag author instructions are available at: https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines
The authors of each accepted paper must upload the camera-ready version of the paper to MIWAI 2024's submission website via EasyChair by September 25, 2024 at 23:59 UTC-12. The camera-ready version includes:
The authors of each accepted paper must send us a signed copyright form. One author may sign on behalf of all of the authors of a particular paper. The copyright form must be completed correctly. In the first three fields of the form, insert the following information:
The copyright form can be accessed here.
For each accepted paper, the registration fee must be paid by at least one of the authors by September 25, 2024 at 23:59 UTC-12 in order for the paper to be included in the LNAI proceedings. The fee details are given in the table below.
Important Note: At least one author of each accepted paper must register for the conference in order for their paper to be included in the LNAI proceedings. Additional (co-)authors of that paper may also register if they wish.
The early registration fee (Early-bird, before September 30, 2024) and the late registration fee (between October 1, 2024 and November 11, 2024) are given in the table below.
To qualify for the "Participant" registration fee, the person planning to attend the conference must be a full-time student or a participant at an accredited institute. You need to bring the original copy of the student status form and present it when collecting the conference materials at the registration desk in Pattaya, Thailand.
The registration fee covers attendance at the conference and the other services we plan to provide during the conference. There will be morning and afternoon breaks and lunches on November 13-15; the reception dinner on the evening of November 14 (TBC) is also covered. Registered authors will have access to the online version of the LNAI proceedings before and during the conference.
Note: The conference organizers WILL NOT be responsible for missing payments and/or any other problems related to the payments. The registration fee is non-refundable.
All payment deadlines are in the UTC-12 timezone.
Payment methods supported: credit card and QR payment (Thailand only).

Type of Author | Type of Registration | Early-bird (USD): before September 25, 2024 | On-Spot (USD): between October 1, 2024 and November 11, 2024 |
---|---|---|---|
Presentation | Online and Virtual | 250 | - |
Presentation | On-site | 500 | 650 |
Participant (non-authors, before September 30, 2024) | On-site | 200 | 250 |
All deadlines are in the UTC-12 timezone.
Event | Date |
---|---|
Submission Deadline | |
Notification Deadline | |
Camera Ready Deadline | |
Registration Deadline | |
Conference Dates | November 11-15, 2024 |
Patrick Doherty, Linköping University, Sweden
Arun Agarwal, University of Hyderabad, India
In the context of collaborative robotics, both distributed planning and task allocation, and acquisition of situation awareness, are essential for supporting goal achievement, collective intelligence, and decision support in teams of robots and human agents. This is particularly important in applications pertaining to emergency rescue and crisis management. Given a high-level mission specification provided by a member of a rescue team, human or robotic, one then requires a mechanism for generating and executing complex, multi-agent distributed plans and tasks. A proper task representation is essential for both the generation and execution of complex multi-agent distributed tasks. Task Specification Trees have been proposed for this purpose, and a Delegation Framework is used for distributed task allocation. Additionally, during operational missions, data and knowledge are gathered incrementally and in different ways by teams of heterogeneous robots and humans. We describe this as the formation and management of Hastily Formed Knowledge Networks (HFKN). The resulting distributed knowledge structures can then be queried by individual agents for decision support. These structures are represented as RDF graphs, and graph synchronization techniques are introduced to retain the consistency of the collective knowledge of a team. Flexible human interaction with teams of robots is also an essential component in emergency rescue. Integrating LLMs into the interaction process provides a new way to think about interaction.
In this talk, I will present both the HFKN and Delegation Frameworks, their integration, and in addition describe various field robotic experiments with UAVs which use the overall system. I will also show some initial work that uses LLMs in the interaction process. If time allows, I will also discuss a Swedish national project where this framework has been used by both industrial and academic partners in large public safety scenarios using UAVs, USVs, and AUVs in maritime and sea rescue scenarios.
Patrick Doherty is a Professor of Computer Science at the Department of Computer and Information Sciences (IDA), Linköping University, Sweden. He leads the Artificial Intelligence Lab at IDA. He is an ECCAI/EurAI fellow, an AAIA fellow, and a member of ACM and AAAI. He previously served as Editor-in-Chief of the Artificial Intelligence Journal. He has over 30 years of experience in areas such as knowledge representation and reasoning, automated planning, intelligent autonomous systems, and multi-agent systems. A major area of application is Unmanned Aircraft Systems (UAS). He has over 200 refereed scientific publications in his areas of expertise and has given numerous keynote and invited talks at leading international conferences.
Navigation involves guiding a robot, drone, or any autonomous system through an unknown, dynamic, and complex environment by understanding its location and surroundings. "Perception" involves gathering and interpreting data from the environment through various sensors, which can include cameras, LIDAR, RADAR, ultrasonic sensors, GPS, and IMUs (Inertial Measurement Units); its goal is to create a comprehensive understanding of the environment. In this talk we discuss the technology stack used to create such maps of a complex and dynamic environment while simultaneously tracking the location of a device within that environment. The talk will not cover "Autonomy", which involves making decisions and taking actions based on the information provided by the "Perception" system. Localisation and mapping are fundamental to understanding the environment for navigation and interaction. Visual Simultaneous Localization and Mapping (vSLAM) is one such technology stack, and this talk aims to cover the core algorithms of vSLAM.
Dynamic and complex environments arise across many domains; in the talk we highlight the specific challenges and technological solutions required to cope with them. Examples include urban traffic for self-driving cars, autonomous robots in manufacturing, and autonomous agriculture (e.g., harvesting robots).
Arun Agarwal completed his BTech (Electrical Engineering, IIT Delhi, India, 1979) and PhD (Computer Science, IIT Delhi, India, 1989). He joined the University of Hyderabad in 1984 as a Lecturer and superannuated as a Senior Professor of Computer and Information Sciences in 2022. He served as Dean of the School (2015-2018), as Pro-Vice-Chancellor-1 (2018-2021), and, for a brief period, as Vice-Chancellor (2021) of the University of Hyderabad.
He was a Visiting Scientist at The Robotics Institute, Carnegie Mellon University, USA (1986) and a Research Associate at the Sloan School of Management, Massachusetts Institute of Technology, USA (1993-94). He has visited many other universities and institutes, including Monash and Melbourne Universities in Australia; the National Center for High Performance Computing, Hsinchu, Taiwan; the Chinese Academy of Sciences, Beijing, China; the San Diego Supercomputing Centre, USA; Mahasarakham University and NECTEC, Thailand; and KISTI, South Korea.
He is an elected Fellow of IETE (2003), an elected Fellow of the Telangana Akademi of Sciences (2011), and a Senior Member of IEEE, USA (1998). He was Chairman of the IEEE Hyderabad Section for the years 2001 and 2002. He is also a recipient of the IEEE Region 10 Outstanding Volunteer Award (2009) in recognition of his dedication and contributions. He was awarded Outstanding Reviewer status for the journal Pattern Recognition, Elsevier, in November 2015, and was felicitated by the Indian Society for Rough Sets in 2017 in recognition of academic excellence and promotion of rough set activities.
He has served on the technical program committees of numerous conferences in the areas of Pattern Recognition and Artificial Intelligence. He also served on the Steering Committee of PRAGMA from 2004 to 2015.
His areas of interest include Computer Vision, Image Processing, Neural Networks, and Grid and Cloud Computing. He has guided 18 PhD theses and more than 125 post-graduate dissertations, and has published more than 100 papers. He has undertaken several projects and consultancies with industry and research laboratories. Currently, he is an Advisor to Zen Technologies Ltd, Hyderabad.