Organised by

Applied Intelligence & Data Analysis

Proceedings will be part of the International Conference Proceedings Series

Keynote Speakers

Physical Cyber Social Computing: An early 21st century approach to Computing for Human Experience

Amit P. Sheth

Amit P. Sheth is an educator, researcher, and entrepreneur. He is a LexisNexis Eminent Scholar (an endowed faculty position, funded by LexisNexis and the Ohio Board of Regents) at Wright State University. He directs the Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis), which conducts research in Web 3.0 and its applications to healthcare and life sciences, cognitive science, and defense/intelligence. Kno.e.sis' activities have led to Wright State University being recognized as one of the top organizations in the world in World Wide Web research. He is a professor of Computer Science & Engineering and a member of the Biomedical Sciences PhD program. Earlier, he was a professor at the University of Georgia, where he founded and directed the LSDIS Lab. Prior to that, he served in R&D groups at Bellcore, Unisys, and Honeywell. Prof. Sheth's research has led to several commercial products, many real-world applications, and two companies which he founded and managed in various executive roles (President/CEO/CTO): Infocosm, and Taalee/Voquette/Semagix, which was likely the first company to develop Semantic Web applications and application development platforms. Professor Sheth is an IEEE Fellow and has received recognitions such as the IBM Faculty Award. He is a highly cited author in Computer Science (h-index of 74) and among the top authors in WWW and databases. He has given over 200 invited talks and colloquia, including over 40 keynotes, and has (co-)organized or chaired 65+ conferences and workshops. He serves on several journal editorial boards, is the Editor-in-Chief of the International Journal on Semantic Web and Information Systems (IJSWIS), and is joint Editor-in-Chief of the Distributed & Parallel Databases Journal.

Physical Cyber Social Computing:
An early 21st century approach to Computing for Human Experience

Amit Sheth, Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis)
Wright State University, Dayton, OH, USA

The proper role of technology to improve human experience has been discussed by visionaries and scientists from the early days of computing and electronic communication. Technology now plays an increasingly important role in facilitating and improving personal and social activities and engagements, decision making, interaction with physical and social worlds, generating insights, and just about anything that an intelligent human seeks to do. I have used the term Computing for Human Experience (CHE) [1] to capture this essential role of technology in a human centric vision. CHE emphasizes the unobtrusive, supportive and assistive role of technology in improving human experience, so that technology “takes into account the human world and allows computers themselves to disappear in the background” (Mark Weiser [2]).

In this talk, I will portray physical-cyber-social (PCS) computing that takes ideas from, and goes significantly beyond, the current progress in cyber-physical systems, socio-technical systems and cyber-social systems to support CHE [3]. I will exemplify future PCS application scenarios in healthcare and traffic management that are supported by (a) a deeper and richer semantic interdependence and interplay between sensors and devices at physical layers, (b) rich technology-mediated social interactions, and (c) the gathering and application of collective intelligence characterized by massive and contextually relevant background knowledge and advanced reasoning in order to bridge machine and human perceptions. I will share an example of PCS computing using semantic perception [4], which converts low-level, heterogeneous, multimodal and contextually relevant data into high-level abstractions that can provide insights and assist humans in making complex decisions. The key proposition is that PCS computing will need to move away from traditional data processing to multi-tier computation along the data-information-knowledge-wisdom dimension, supporting reasoning that converts data into abstractions that humans are adept at using.
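The abstraction step sketched above (low-level observations explained by high-level concepts [4]) can be illustrated with a deliberately simplified, hypothetical example; the names `knowledge` and `perceive` and the clinical toy domain are illustrative only, not the talk's actual algorithm or ontology. Background knowledge maps each abstraction to the observable features that would explain it, and perception retains only the abstractions consistent with every observation so far:

```python
# Hypothetical background knowledge: each high-level abstraction is linked
# to the low-level observable features that would explain it.
knowledge = {
    "flu":  {"fever", "cough", "fatigue"},
    "cold": {"cough", "sneezing"},
}

def perceive(observations):
    """Return the abstractions that explain every observation so far
    (a toy version of the explanation step in semantic perception)."""
    return {label for label, features in knowledge.items()
            if observations <= features}

print(perceive({"cough"}))           # both explanations remain plausible
print(perceive({"cough", "fever"}))  # narrowed to a single abstraction
```

As more observations arrive, the set of plausible abstractions shrinks, which is the sense in which multimodal data is converted into a form humans can act on directly.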

Keywords: Computing for Human Experience, Machine-Human-Social-Semantic Perception, Semantic Abstraction, Physical-Cyber-Social Systems, Physical-Cyber-Social Computing

[1] A. Sheth, Computing for Human Experience
[2] M. Weiser, The Computer for the 21st Century
[3] A. Sheth, Semantics empowered Cyber-Physical-Social Systems
[4] C. Henson, A. Sheth, K. Thirunarayan, Semantic Perception: Converting Sensory Observations to Abstractions

Contextual Synchronization on Social Collaboration

Jason J. Jung

Jason J. Jung is an assistant professor in the Department of Computer Engineering at Yeungnam University, Korea. He was a postdoctoral researcher at INRIA Rhône-Alpes, France, in 2006, and a visiting scientist at the Fraunhofer Institute FIRST in Berlin, Germany, in 2004. He received a B.Eng. in Computer Science and Mechanical Engineering from Inha University in 1999, and M.S. and Ph.D. degrees in Computer and Information Engineering from Inha University in 2002 and 2005, respectively. His research topics are knowledge engineering on social networks using machine learning, semantic Web mining, and ambient intelligence. He has published about 25 international journal articles in Knowledge-Based Systems, Information Retrieval, Information Processing & Management, Knowledge and Information Systems, and Expert Systems with Applications. He is an editorial board member of the Journal of Universal Computer Science and the International Journal of Intelligent Information and Database Systems, and has edited 10 special issues in Information Sciences, the Journal of Network and Computer Applications, Computing and Informatics, and others.
Contextual Synchronization on Social Collaboration

Jason J. Jung

As working environments change dramatically, it is becoming difficult to support collaboration among people. In this talk, I will claim that context should be efficiently synchronized.
To efficiently support collaboration between people (agents) in real time, we propose an ontology-based platform for connecting the most relevant users (e.g., colleagues and classmates) according to their context.
To this end, we model two kinds of contexts using semantic information derived from ontologies: (i) personal context, and (ii) consensual context, which is integrated from several personal contexts.
More importantly, we formulate measurement criteria to compare them. Consequently, groups can be dynamically organized with respect to similarities among several aspects of personal context. In particular, users can engage in complex collaborations involving multiple semantics. For experimentation, a social browsing system has been implemented based on context synchronization.
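The comparison of personal contexts can be illustrated with a minimal sketch. Everything here is hypothetical — the talk's measurement criteria are ontology-based and richer — but the shape of the idea is the same: represent each user's personal context as a set of semantic concepts, compare contexts pairwise, and group users whose similarity clears a threshold:

```python
from itertools import combinations

# Hypothetical personal contexts: each user is described by a set of
# ontology concepts (in the actual platform these come from real ontologies).
contexts = {
    "alice": {"SemanticWeb", "Ontology", "DataMining"},
    "bob":   {"SemanticWeb", "Ontology", "SocialNetwork"},
    "carol": {"Robotics", "ControlTheory"},
}

def jaccard(a, b):
    """Set-based similarity between two personal contexts."""
    return len(a & b) / len(a | b)

def group_users(contexts, threshold=0.4):
    """Pair up users whose contexts are similar enough to collaborate."""
    return [(u, v)
            for u, v in combinations(sorted(contexts), 2)
            if jaccard(contexts[u], contexts[v]) >= threshold]

print(group_users(contexts))  # alice and bob share enough context
```

A consensual context could then be built by intersecting or merging the personal contexts of each discovered group.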

Guidelines for Multilingual Linked Data

Prof. Dr. Asunción Gómez-Pérez

Prof. Dr. Asunción Gómez-Pérez is Full Professor at the Universidad Politécnica de Madrid (UPM). She has been the Director of the Artificial Intelligence Department since 2008 and Director of the OEG (Ontology Engineering Group) at UPM since 1995. Her main research areas are Ontological Engineering, the Semantic Web, and Knowledge Management. She led at UPM the following EU projects: Ontoweb, Esperonto, Knowledge Web, NeOn, SEEMP, OntoGrid, ADMIRE, DynaLearn, SemSorGrid4Env, SEALS, and MONNET. She coordinated OntoGrid and is now coordinating SemSorGrid4Env and SEALS. She is also leading projects at UPM funded by Spanish agencies; the most relevant are España Virtual, WEBn+1, GeoBuddies, and the Spanish network on the Semantic Web. She has published more than 150 papers, is the author of a book on Ontological Engineering, and co-author of a book on Knowledge Engineering. She has co-directed the summer school on Ontological Engineering and the Semantic Web since 2003. She was program chair of ASWC'09, ESWC'05, and EKAW'02, and co-organiser of many workshops on ontologies.
Guidelines for Multilingual Linked Data

Prof. Dr. Asunción Gómez-Pérez

In this talk, we argue that there is a growing number of linked datasets in different natural languages, and that there is a need for guidelines and mechanisms to ensure the quality and organic growth of this emerging multilingual data network. However, we have little knowledge regarding the actual state of this data network, its current practices, and the open challenges that it poses. Questions regarding the distribution of natural languages, the links that are established across data in different languages, or how linguistic features are represented, remain mostly unanswered. Addressing these and other language-related issues can help to identify existing problems, propose new mechanisms and guidelines or adapt the ones in use for publishing linked data including language-related features, and, ultimately, provide metrics to evaluate quality aspects.

In this talk we review, discuss, and extend current guidelines for publishing linked data (1) by focusing on those methods, techniques and tools that can help RDF publishers to cope with language barriers, and (2) by identifying existing gaps, remaining research and technical challenges. Whenever possible, we will illustrate and discuss each of these guidelines, methods, and tools on the basis of practical examples that we have encountered in the publication of the dataset.
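One concrete consumption-side concern behind these guidelines is how a client should use language-tagged labels once data in several languages is linked. The sketch below is hypothetical (the function `best_label` and the sample data are illustrative, not from the talk), but it shows the common pattern of honoring a user's language preferences with a fallback to whatever the publisher provided:

```python
def best_label(labels, preferred):
    """Pick a label matching the user's ordered language preferences,
    falling back to the first label the publisher provided."""
    by_lang = {lang: text for text, lang in labels}
    for lang in preferred:
        if lang in by_lang:
            return by_lang[lang], lang
    # no preferred language available: fall back to the first label
    return labels[0]

# Language-tagged labels for one resource, as (text, language-tag) pairs.
labels = [("Vienna", "en"), ("Wien", "de"), ("Viena", "es")]
print(best_label(labels, ["fr", "de"]))  # ('Wien', 'de')
```

Guidelines that mandate consistent language tags make exactly this kind of selection logic reliable across datasets.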


Discovery Hub: a discovery engine on top of DBpedia

Authors: Nicolas Marie, Fabien Gandon, Damien Legrand, Myriam Ribière.
Abstract: This tutorial supports the Discovery Hub demonstration proposal. Web growth, both in size and diversity, and users' growing expectations increase the need for innovative search approaches and technologies. Exploratory search systems are built specifically to help users in cognitively demanding search tasks such as learning or investigation. Some of these systems are built on top of linked data and use its semantic richness to provide cognitively optimized search experiences. This paper presents the Discovery Hub operational prototype after detailing its Real Time Spreading Activation (RTSA) algorithm. The latter processes linked data in real time and does not require partial or total pre-processing of results. This real-time processing offers advantages in terms of handling data dynamicity and querying flexibility.
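To make the spreading-activation idea concrete, here is a minimal, hypothetical sketch — not Discovery Hub's actual RTSA algorithm, which adds real-time linked-data access, semantic weighting, and more. Activation starts at seed nodes and is propagated along graph edges with a decay factor, so nodes semantically close to the seeds accumulate the highest scores:

```python
def spread_activation(graph, seeds, decay=0.5, iterations=2):
    """Propagate activation from seed nodes over an adjacency-list graph.

    graph: dict mapping a node to its list of neighbors.
    seeds: dict mapping seed nodes to their initial activation.
    Each iteration, every active node passes decay * (its activation),
    split evenly, to its neighbors.
    """
    activation = dict(seeds)
    for _ in range(iterations):
        new = dict(activation)
        for node, value in activation.items():
            neighbors = graph.get(node, [])
            if not neighbors:
                continue
            share = value * decay / len(neighbors)
            for n in neighbors:
                new[n] = new.get(n, 0.0) + share
        activation = new
    return activation

# Toy illustration on a hypothetical DBpedia-like neighborhood.
graph = {"Mozart": ["Salzburg", "Opera"], "Opera": ["Verdi"]}
result = spread_activation(graph, {"Mozart": 1.0})
print(sorted(result.items(), key=lambda kv: -kv[1]))
```

Ranking nodes by final activation yields candidate discoveries related to the seed topic; doing this at query time, rather than over precomputed results, is what gives the approach its flexibility.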

Data Processing and Semantics for Advanced Internet of Things (IoT) Applications: modeling, annotation, integration, and perception

Authors: Pramod Anantharam, Payam Barnaghi, Amit Sheth
Abstract: This tutorial presents tools and techniques for effectively utilizing the Internet of Things (IoT) for building advanced applications, including Physical-Cyber-Social (PCS) systems. The issues and challenges related to IoT, semantic data modeling, annotation, knowledge representation (e.g., modeling for constrained environments, complexity issues, and time/location dependency of data), integration, analysis, and reasoning will be discussed. The tutorial will describe recent developments on creating annotation models and semantic description frameworks for IoT data (e.g., the W3C Semantic Sensor Network ontology). A review of enabling technologies and common scenarios for IoT applications from the data and knowledge engineering point of view will be discussed. Information processing, reasoning, and knowledge extraction, along with existing solutions related to these topics, will be presented. The tutorial summarizes state-of-the-art research and developments on PCS systems, IoT-related ontology development, linked data, domain knowledge integration and management, querying large-scale IoT data, and AI applications for automated knowledge extraction from real-world data.
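The semantic-annotation step the tutorial covers can be sketched in a few lines. This is a hypothetical toy emitter, not the tutorial's material: the property names follow the style of the W3C Semantic Sensor Network ontology, but the URNs, the function `annotate`, and the flat triple representation are illustrative assumptions rather than a real RDF toolchain:

```python
# Namespace in the style of the W3C SSN ontology (illustrative).
SSN = "http://purl.org/net/ssnx/ssn#"

def annotate(sensor_id, prop, value, unit):
    """Turn one raw sensor reading into subject-predicate-object triples,
    so that downstream reasoners can integrate it with other linked data."""
    obs = f"urn:obs:{sensor_id}:{prop}"
    return [
        (obs, SSN + "observedBy", f"urn:sensor:{sensor_id}"),
        (obs, SSN + "observedProperty", f"urn:property:{prop}"),
        (obs, SSN + "observationResult", f"{value} {unit}"),
    ]

triples = annotate("t42", "temperature", 21.5, "Cel")
for s, p, o in triples:
    print(s, p, o)
```

In practice one would use an RDF library and the published ontology terms; the point here is only that a reading becomes a small graph that names its sensor, its observed property, and its result, which is what makes later integration and reasoning possible.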


The 3rd International Conference on Web Intelligence, Mining and Semantics (WIMS'13) is organised under the auspices of the Autonomous University of Madrid. The sites for the previous editions, WIMS'12 and WIMS'11, can be found online.

This is the third in a new series of conferences concerned with intelligent approaches to transform the World Wide Web into a global reasoning and semantics-driven computing machine.

Supported by