 

Keynote Lectures

IC3K is a joint conference composed of three concurrent conferences: KDIR, KEOD and KMIS. These three conferences are always co-located and held in parallel. Keynote lectures are plenary sessions and can be attended by all IC3K participants.

Domenico Talia, University of Calabria and ICAR-CNR, Italy
          Title: Big Data Mining Services and Distributed Knowledge Discovery Applications on Clouds

Sonia Bergamaschi, DIEF - University of Modena and Reggio Emilia, Italy
          Title: Big Data Integration - State of the Art & Challenges

Michele Missikoff, CNR and Polytechnic University of Marche, Italy
          Title: Semantics of Innovation

Wil Van Der Aalst, Technische Universiteit Eindhoven, Netherlands
          Title: No Knowledge Without Processes - Process Mining as a Tool to Find Out What People and Organizations Really Do

Wim Van Grembergen, University of Antwerp, Belgium
          Title: A Research Journey into Enterprise Governance of IT

Marie-Jeanne Lesot, Université Pierre et Marie Curie - LIP6, France
          Title: Bridging the Emotional Gap - From Objective Representations to Subjective Interpretations

 

Big Data Mining Services and Distributed Knowledge Discovery Applications on Clouds
Domenico Talia
University of Calabria and ICAR-CNR
Italy


Brief Bio
Domenico Talia is a full professor of computer engineering at the University of Calabria and the director of ICAR-CNR. He is a partner of startups like Scalable Data Analytics, Exeura and Eco4Cloud. His research interests include parallel and distributed data mining algorithms, Cloud computing, Grid services, distributed knowledge discovery, mobile computing, green computing systems, peer-to-peer systems, and parallel programming.
Talia published ten books and about 300 papers in archival journals such as CACM, Computer, IEEE TKDE, IEEE TSE, IEEE TSMC-B, IEEE Micro, ACM Computing Surveys, FGCS, Parallel Computing, IEEE Internet Computing and international conference proceedings. He is a member of the editorial boards of IEEE Transactions on Computers, the Future Generation Computer Systems journal, the International Journal of Web and Grid Services, the Scalable Computing: Practice and Experience journal, MultiAgent and Grid Systems: An International Journal, and the Web Intelligence and Agent Systems International journal. Talia has been a project evaluator for several international institutions such as the European Commission, AERES in France, the Austrian Science Fund, the Croucher Foundation, and the Russian Federation Government. He served as a program chair, organizer, or program committee member of several international scientific conferences and gave many invited talks and seminars at international conferences and schools. Talia is a member of the ACM and the IEEE Computer Society.


Abstract
Digital data repositories are increasingly massive and distributed, so we need smart data analysis techniques and scalable architectures to extract useful information from them in a reasonable time. Cloud computing infrastructures offer effective support for addressing both the computational and data storage needs of big data mining and parallel knowledge discovery applications. In fact, complex data mining tasks involve data- and compute-intensive algorithms that require large and efficient storage facilities together with high-performance processors to get results in acceptable times. In this talk we introduce the topic and the main research issues, then we present a Data Mining Cloud Framework designed for developing and executing distributed data analytics applications as workflows of services. In this environment, data sets, analysis tools, data mining algorithms and knowledge models are implemented as single services that can be combined, through a visual programming interface, into distributed workflows to be executed on Clouds. The first implementation of the Data Mining Cloud Framework on Azure is presented and the main features of the graphical programming interface are described.
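As a rough illustration of the workflow-of-services idea described in the abstract, the sketch below composes analysis steps as tasks with dependencies and runs them in order. All names here are hypothetical, for illustration only; they are not the actual Data Mining Cloud Framework API.

```python
# Hypothetical sketch of a data-analysis workflow composed of services.
# Each Task wraps one service; run_workflow resolves dependencies and
# passes each task the results of the tasks it depends on.

class Task:
    def __init__(self, name, func, deps=()):
        self.name, self.func, self.deps = name, func, list(deps)

def run_workflow(tasks):
    """Execute tasks in dependency order; return a dict of all results."""
    done = {}
    remaining = list(tasks)
    while remaining:
        for t in list(remaining):
            if all(d in done for d in t.deps):
                done[t.name] = t.func(*(done[d] for d in t.deps))
                remaining.remove(t)
    return done

# A toy pipeline: load -> clean -> mine (a trivial frequency "model").
load  = Task("load",  lambda: [1, 2, 2, 3, None])
clean = Task("clean", lambda xs: [x for x in xs if x is not None], deps=["load"])
mine  = Task("mine",  lambda xs: {x: xs.count(x) for x in set(xs)}, deps=["clean"])

results = run_workflow([load, clean, mine])
print(results["mine"])  # frequency model extracted from the cleaned data
```

In the framework described in the talk, each node would be a remote Cloud service rather than a local function, and the dependency graph would be drawn in the visual programming interface rather than written in code.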

 

Big Data Integration - State of the Art & Challenges
Sonia Bergamaschi
DIEF - University of Modena and Reggio Emilia
Italy


Brief Bio
Sonia Bergamaschi was born in Modena (Italy) and received her Laurea degree in Mathematics from Università di Modena in 1977. She is currently full professor of Computer Engineering at the Engineering Department “Enzo Ferrari” - Università di Modena e Reggio Emilia, where she leads the "DBGROUP" database research group (www.dbgroup.unimo.it).
Her research activity has been mainly devoted to knowledge representation and management in the context of very large databases, covering both theoretical and implementation aspects. Since 1985 she has been very active in the area of coupling artificial intelligence (Description Logics) and database techniques to develop intelligent database systems. On this topic, very relevant theoretical results have been obtained, and a system, ODB-Tools, performing consistency checking and semantic query optimization in object-oriented databases, has been developed on the basis of these results. Since 1999, her research efforts have been devoted to the Intelligent Information Integration research area. A data integration system called MOMIS has been developed, which provides integrated access to structured and semistructured data sources and allows users to pose a single query and receive a single unified answer. In 2009 she founded the academic (UNIMORE) start-up DataRiver, aimed at delivering an open-source version of the MOMIS system (first release in April 2010; www.datariver.it). Since 2010 her research activities have extended to keyword search on databases, the Semantic Web and automatic annotation of data sources. More recently, her research efforts have been devoted to Big Data and Big Data analytics. Sonia Bergamaschi was coordinator of or participant in many ICT European projects: SEWASIE (2002-2005), WINK (2002-2003), STASIS (2006-2009), FACIT-SME (2010-2012), and Keystone (2013-2017). She was coordinator of the MURST FIRB project “NeP4B (Networked Peers for Business)” (2006-2009).
She has published more than one hundred international journal and conference papers, and her research has been funded by the Italian MURST, CNR and ASI institutions and by European Community projects. She has served on the committees of international and national database and AI conferences. She is a member of the IEEE Computer Society and of the ACM. For a detailed description of her research activity and of the systems developed, see: www.dbgroup.unimo.it.


Abstract

Big data is a popular term for describing the exponential growth, availability and use of information, both structured and unstructured. Much has been written on the big data trend and its potential for innovation and growth of enterprises. The advice of IDC (one of the premier advisory firms specializing in information technology) for organizations and IT leaders is to focus on the ever-increasing volume, variety and velocity of information that forms big data.
In most cases, such huge volumes of data come from multiple sources and heterogeneous systems; thus, data have to be linked, matched, cleansed and transformed. Moreover, it is necessary to determine how disparate data relate to common definitions, and how to systematically integrate structured and unstructured data assets to produce useful, high-quality and up-to-date information.
The research area of data integration, active since the 1990s, has provided good techniques for facing the above issues in a unifying framework, relational databases (RDB), with reference to a less complex scenario (smaller volume, variety and velocity). Moreover, simpler forms of integration among different databases can be efficiently resolved by the data federation technologies available for DBMSs today.
Adopting RDB as a general framework for big data integration, and solving the issues above, namely volume, variety, variability and velocity, by using more powerful RDBMS technologies enhanced with data integration techniques, is one possible choice. On the other hand, new technologies have come into play: NoSQL systems, data warehouse appliance platforms provided by the major software players, data governance platforms, etc.
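To make the linking/matching/cleansing step mentioned above concrete, here is a deliberately simplified sketch of record linkage across two heterogeneous sources. The data, field names and matching rule are invented for illustration; real integration systems (such as the MOMIS system mentioned in the bio) use far richer schema- and instance-level techniques.

```python
# Illustrative sketch (not from the talk): linking records from two
# heterogeneous sources after a simple cleansing/normalisation step.

def normalize(rec):
    """Cleanse a record: stringify, strip whitespace, lowercase all fields."""
    return {k: str(v).strip().lower() for k, v in rec.items()}

def match(source_a, source_b, key="name"):
    """Link records from the two sources that agree on the normalised key."""
    index = {normalize(r)[key]: r for r in source_a}
    links = []
    for r in source_b:
        k = normalize(r)[key]
        if k in index:
            links.append((index[k], r))
    return links

# Two toy sources describing overlapping entities with different schemas.
crm   = [{"name": "ACME Corp ", "city": "Modena"}]
sales = [{"name": "acme corp",  "revenue": 100},
         {"name": "Globex",     "revenue": 250}]

print(match(crm, sales))  # one linked pair for "acme corp"
```

The cleansing step is what makes "ACME Corp " and "acme corp" comparable at all; at big data scale, the same idea has to cope with far noisier variation, which is exactly the challenge the abstract describes.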
In this talk, Prof. Sonia Bergamaschi will provide an overview of this exciting field, which will become more and more important.

 

Semantics of Innovation
Michele Missikoff
CNR and Polytechnic University of Marche
Italy


Brief Bio
Dr. Michele Missikoff is founder and Scientific Advisor of the Laboratory of Enterprise and Knowledge Systems, and past Director of Research, at IASI (Institute for Systems Analysis and Informatics) of the CNR, Italian National Research Council; Senior Scientific Advisor of the CeRSI research center at LUISS University; and past director of CEK, the Center for Enterprise Knowledge at the Free University of International Studies of Rome (UNINT), where he teaches Enterprise Information Systems. He is currently the Scientific Coordinator of the European Project BIVEE. For two decades he cooperated with the European Commission in the context of the EC IST FP6 and FP7 programmes, in the area of eGovernment first and eBusiness later, acting as evaluator, reviewer and rapporteur. More recently, he coordinated the European Task Force for the FInES (Future Internet Enterprise Systems) Research Roadmap 2025, in the DG INFSO of the European Commission. He managed and participated in more than 20 European and national projects. He has long-standing research experience in databases, knowledge representation, and semantic technologies. He served on the Program Committees of primary international conferences and on the editorial boards of international journals, acting as General Chair and Program Committee Chair of international conferences such as CoopIS’98, CAiSE’03 and CoopIS’14, as well as Industrial Chair of VLDB’01. He is co-founder and past president of the international EDBT Foundation. He has authored more than 150 scientific papers.


Abstract

Innovation is today one of the most used terms when talking about strategies to recover from the current economic downturn. However, in the large majority of cases, the term is used as a generic 'placeholder', a sort of container whose actual content is left to intuition. If you ask questions, trying to get a deeper understanding, you realise that the notion of 'innovation' is very rich and articulated while, at the same time, ideas about it are in general rather vague.

In the European project BIVEE, active for three years, we have studied business and enterprise innovation, proposing a solution based on the use of semantic technologies, with a focus on Virtual Enterprises (essentially, networks of cooperating SMEs). Innovation in its essence is seen as a complex, ill-defined process of knowledge enrichment: starting from a given problem or idea, the process requires a large amount of (existing) knowledge to produce the new knowledge necessary to solve the given problem and/or transform the idea into a concrete product (tangible or intangible).

This talk starts with an overview of the broad, encompassing notion of innovation, with its facets and articulations, and then illustrates an approach to innovation support and management based on a knowledge-centric approach. Such an approach has the desirable property of being largely independent of any specific industrial sector, and it can easily be adapted to different kinds of production, from manufacturing to the service industry.

 

No Knowledge Without Processes - Process Mining as a Tool to Find Out What People and Organizations Really Do
Wil Van Der Aalst
Technische Universiteit Eindhoven
Netherlands


Brief Bio
Prof.dr.ir. Wil van der Aalst is a full professor of Information Systems at the Technische Universiteit Eindhoven (TU/e). He is also the Academic Supervisor of the International Laboratory of Process-Aware Information Systems of the National Research University, Higher School of Economics in Moscow. Moreover, since 2003 he has a part-time appointment at Queensland University of Technology (QUT). At TU/e he is the scientific director of the Data Science Center Eindhoven (DSC/e). His personal research interests include workflow management, process mining, Petri nets, business process management, process modeling, and process analysis. Wil van der Aalst has published more than 165 journal papers, 17 books (as author or editor), 350 refereed conference/workshop publications, and 60 book chapters. Many of his papers are highly cited (he has an H-index of more than 104 according to Google Scholar, making him the European computer scientist with the highest H-index) and his ideas have influenced researchers, software developers, and standardization committees working on process support. He has been a co-chair of many conferences including the Business Process Management conference, the International Conference on Cooperative Information Systems, the International conference on the Application and Theory of Petri Nets, and the IEEE International Conference on Services Computing. He is also editor/member of the editorial board of several journals, including Computing, Distributed and Parallel Databases, Software and Systems Modeling, the International Journal of Business Process Integration and Management, the International Journal on Enterprise Modelling and Information Systems Architectures, Computers in Industry, Business & Information Systems Engineering, IEEE Transactions on Services Computing, Lecture Notes in Business Information Processing, and Transactions on Petri Nets and Other Models of Concurrency. 
In 2012, he received the degree of doctor honoris causa from Hasselt University. In 2013, he was appointed as Distinguished University Professor of TU/e and was awarded an honorary guest professorship at Tsinghua University. He is also a member of the Royal Holland Society of Sciences and Humanities (Koninklijke Hollandsche Maatschappij der Wetenschappen) and the Academy of Europe (Academia Europaea).


Abstract
Recently, process mining emerged as a new scientific discipline on the interface between process models and event data. Whereas conventional Business Process Management (BPM) approaches are mostly model-driven with little consideration for event data, the increasing availability of high-quality data enables management decisions based on “evidence” rather than PowerPoints or Visio diagrams. Process mining can be used to (better) configure BPM systems and check compliance. Moreover, the high-quality event logs of BPM systems allow for advanced forms of process mining such as prediction, recommendation, and trend analysis. The challenge is to turn torrents of event data ("Big Data") into valuable insights related to performance and compliance. The results can be used to identify and understand bottlenecks, inefficiencies, deviations, and risks. Process mining helps organizations to "mine their own business": it enables them to discover, monitor and improve real processes by extracting knowledge from event logs. In his talk, Prof. Wil van der Aalst will provide an overview of this exciting field, which will become more and more important for BPM.
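To give a flavour of what "extracting knowledge from event logs" means in practice, the sketch below computes the directly-follows relation from a toy event log; this relation is the raw material of many discovery techniques (e.g. the alpha algorithm). The log and activity names are invented for illustration and are not from the talk.

```python
# Minimal sketch of the first step of process discovery: counting how
# often activity a is directly followed by activity b across all traces.

from collections import Counter

def directly_follows(log):
    """Return a Counter mapping (a, b) to how often b directly follows a."""
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

# Each trace is one case's recorded sequence of activities.
log = [
    ["register", "check", "decide", "pay"],
    ["register", "check", "decide", "reject"],
    ["register", "decide", "pay"],
]

df = directly_follows(log)
print(df[("register", "check")])  # 2: "check" directly follows "register" in two cases
```

From such counts a discovery algorithm can infer a process model showing, for instance, that "decide" always follows "register" (sometimes via "check") and branches into "pay" or "reject"; real logs, of course, contain thousands of cases and noise.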

 

A Research Journey into Enterprise Governance of IT
Wim Van Grembergen
University of Antwerp
Belgium


Brief Bio
Wim Van Grembergen is a full professor at the Economics and Management Faculty of the University of Antwerp (UA) and executive professor at the Antwerp Management School (AMS). He teaches information systems at master and executive level, and researches IT governance, IT strategy, IT performance management and the IT balanced scorecard. Within his IT Alignment and Governance (ITAG) Research Institute (www.uams.be/itag) he conducts research for ISACA/ITGI on IT governance and supports the continuous development of COBIT and VAL IT. Currently he is involved in the development of COBIT 5. Dr. Van Grembergen is a frequent speaker at academic and professional meetings and conferences and has served in a consulting capacity to a number of firms. He has several publications in leading academic journals and has published books on IT governance and the IT balanced scorecard. His most recent book, “Enterprise Governance of IT: Achieving Strategic Alignment and Value”, was published in 2009 (Springer, New York).


Abstract
Enterprise governance of IT is a relatively new concept in the literature and is gaining more interest in the academic and practitioners' worlds. Enterprise governance of IT addresses the definition and implementation of structures, processes and relational mechanisms that enable both business and IT people to execute their responsibilities in support of business/IT alignment and the creation of value from IT-enabled business investments. In his research talk, Wim Van Grembergen will discuss the important theories and practices around enterprise governance of IT and give an overview of his ten years of research on this topic. He will also introduce the recently published COBIT 5 framework. This new practitioner's framework now clearly makes a distinction between IT governance and IT management and offers interesting opportunities for future research.

 

Bridging the Emotional Gap - From Objective Representations to Subjective Interpretations
Marie-Jeanne Lesot
Université Pierre et Marie Curie - LIP6
France


Brief Bio
Marie-Jeanne Lesot obtained her PhD in 2005 from the University Pierre et Marie Curie in Paris. Since 2006 she has been an associate professor at the Computer Science Laboratory of Paris 6 (LIP6) and a member of the Learning and Fuzzy Intelligent systems (LFI) group. Her research interests focus on fuzzy machine learning, with the objectives of data interpretation and semantics integration and, in particular, of modelling and managing subjective information; they include similarity measures, fuzzy clustering, linguistic summaries, affective computing and information scoring.


Abstract
In the framework of affective computing, emotion mining constitutes a classification task that aims at recognising the emotional content of various types of data including, but not limited to, texts, images or physiological signals. It adds, to the traditional semantic gap between low-level numerical data descriptions and their high-level conceptual interpretations, the difficulty of going from an objective to a subjective representation.

The labels to be considered in this specific classification task are difficult to model computationally, due to the essential ambiguity and imprecision of emotions. After discussing this difficulty, the talk will illustrate the shift from numerical data representations to the emotions the data convey, through the integration of intermediate subjectivity levels, exploiting either external knowledge to include emotional information in the objective representation, or a subjective non-emotional level.
