EOLE fourth webinar on open source and open science within public administrations

EOLE 2020 Webinar #4: Open source and open science within public administrations


This webinar is part of the 13th edition of the EOLE conference, which is being held online this year due to the worldwide pandemic.

An initiative born in 2008, the European Open Source & Free Software Law Event (EOLE) aims to promote the sharing and dissemination of legal knowledge related to free software, as well as the development and promotion of good practices.


This fourth webinar, which focuses on "Open source and open science within public administrations", was facilitated by Célya Gruson-Daniel and divided into the following five parts:

  • 1. "Computer modeling, open sourcing and open science" - Alexis Drogoul (00:04:55)
  • 2. "Open source in open science - or how to facilitate technological trust production" - Alexandra Giannopoulou
  • 3. "Open Source & Open Science @CERN" - Myriam Ayass
  • 4. "Trace, test, treat: data failure in a pandemic" - Carlo Piana
  • 5. Question time

1. Computer modeling, open sourcing and open science - Alexis Drogoul

(00:04:55)

Let’s start with our first presentation, by Alexis Drogoul, who works at IRD as a senior researcher. Since 2007 he has been working in Vietnam to enhance the research capacity of Vietnamese teams and the design of models for environmental decision support and adaptation to climate change. He will share his experience in the development of COMOKIT, an open source agent-based model, and the main reasons for using open source in the emerging field of sustainability science.

Alexis Drogoul graduated in AI in 1990 and received his PhD from the University of Paris 6 in 1993. Recruited in 1995 as an associate professor, he became a full professor in 2000 and joined IRD as a senior researcher in 2004. He works on agent-based simulation of complex systems, mainly by developing the GAMA platform. Since 2007, he has been working in Vietnam to enhance the research capacity of Vietnamese teams (IFI-MSI, CTU-DREAM, USTH-ICTLab, TLU-WARM) on the design of models for environmental decision support and adaptation to climate change, in the framework of several international research projects. In addition, he has been the representative of IRD in Vietnam and the Philippines since 2017.

Context

In 2020, like many other countries, Vietnam has been facing many different threats resulting from complex interactions between society and its environments and ecosystems: pandemics (much better controlled there than in Europe), natural catastrophes, atmospheric pollution, and global environmental changes. These threats have caused significant material and human losses and prompted a great deal of work. In Vietnam, as in any other country, the social and political demand is to better understand these challenges, not only to improve scientific knowledge but also to be able to anticipate and/or mitigate them. That is why, facing these different threats, scientists are at the front lines of public opinion, communication, and policy making. The problem, however, is that these threats and phenomena involve people and are far too complex for the classical experimental approach. These are not only physical and biological problems; we are dealing with people and societies, and one of the only possible approaches, used extensively in recent years, is to work and reason on models.

Computer Models

A model is a simplified and abstract representation of a reference system, the system being something of interest about which someone (a policy maker, a scientist, …) has one or several questions they want to answer. Computer models are models in which the representation is a computer program. Moreover, computer models can be simulated: the program can be executed like any other program on a computer.

They can be used for visualization, training, control, forecasting, decision support, and more. Simulation has become very efficient and very natural: we live in a world where people run simulations every day (in games, for example). In a more scientific setting, simulations are to models what experiments are to real systems. They are used to replace experimentation when it is impossible for ethical or practical reasons, offering an alternative and more ethical approach. One idea you have probably heard of is the digital twin: a replication of a real system on which we can conduct experiments, called simulations.

There are many techniques for representing complex systems. One is particularly interesting because it allows us to represent the behaviour of people: agent-based modeling, based on an "individual-centred" representation. Basically, it is a way of representing the world in which you build digital individuals, which we call "agents", using small programs. The simulation gathers all those little programs and makes them interact in artificial environments. The interest of this approach compared to mathematical modeling is that it allows us to reconstruct and simulate virtual worlds on a computer and to explore many different scenarios: you can try different policies and see what impact they have on people, you can change the behaviour of people, and you can explore millions of scenarios in terms of organization, social impact, economic impact, and so on.
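To make this concrete, below is a minimal, hypothetical agent-based sketch in Python (illustrative only; it is not GAMA/GAML or COMOKIT code). Each agent is a tiny program with its own state and behaviour, and the simulation simply makes the agents interact step by step and records the aggregate outcome.

    # Minimal illustrative agent-based model (hypothetical, not GAMA/COMOKIT code):
    # each "agent" carries its own state and a step() behaviour; the scheduler
    # asks every agent to act, then the aggregate outcome is observed.
    import random

    class Agent:
        def __init__(self, infected=False):
            self.infected = infected

        def step(self, contacts, transmission_prob):
            # An uninfected agent may catch the infection from its contacts.
            if not self.infected and any(c.infected for c in contacts):
                if random.random() < transmission_prob:
                    self.infected = True

    def simulate(n_agents=1000, n_steps=50, contacts_per_step=5,
                 transmission_prob=0.05, seed=42):
        random.seed(seed)  # fixed seed: the run can be reproduced exactly
        agents = [Agent(infected=(i == 0)) for i in range(n_agents)]
        history = []
        for _ in range(n_steps):
            for agent in agents:
                contacts = random.sample(agents, contacts_per_step)  # random mixing
                agent.step(contacts, transmission_prob)
            history.append(sum(a.infected for a in agents))
        return history  # number of infected agents after each step

    print(simulate()[-1])

Changing a "policy" parameter (for example reducing contacts_per_step) and re-running the simulation is exactly the kind of scenario exploration described above.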

These models are very detailed compared to more abstract mathematical ones, which sometimes consist of a single equation. As a result, they are usually themselves very complex pieces of software that rely on complex data sets: GIS data, quantitative and qualitative surveys, demographic data, etc. To build them, people rely on modeling platforms such as the GAMA platform.

GAMA

GAMA is an adventure that began 14 years ago in Vietnam. It is a free and open source modeling and simulation platform, built and designed by a French and Vietnamese consortium and since then developed by an international consortium. In a few figures: compared to commercial software it is quite small, but quite big when you consider the communities that could be interested in this kind of environment. As of 2019, around 4,000 users rely on 15 to 25 contributors who have been active for around 13 years. GAMA has been open source since 2008, for several reasons: the usual robustness, maintenance and software evolution, and the possibility for an institute like IRD (which is present in 37 countries) to support distributed cooperative work, so that people from different countries can work together at the same time.

Also, and probably the most important reason, open source allows researchers to open what we can call the "black box" of simulations: the generation of random numbers, the algorithms used for scheduling agents, the primitives, the things written in native languages that can be used in models, and so on. All these elements, even though people do not usually know what they are made of, have an influence on the outcome of simulations, and it is very important to understand how they are written. Of course, not everybody can read and understand the code, but the simple fact that the code is available and changeable is very important for trust and confidence in simulations.

Open sourcing simulation platforms like GAMA is not enough, because the outcomes (the model itself but also the outcomes of simulations) need to be open as well, since their influence on policy making is becoming global. We now have more and more detailed and realistic models, which raises a lot of challenges: they need to be understandable and trustworthy. For all of this, models need to be and remain open. Since the beginning, authors of models in GAMA have been strongly encouraged to make their models open source. It has not always been possible (because of data ownership, for example), but we encourage them, first by helping authors publish their models on open repositories and rely on open data; at IRD, for example, we have a Dataverse repository and warehouse which we use for that. A more radical choice is that the source code of a model is always available to be viewed and edited by users, even in demo mode: it is always possible to see what is inside the model.
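As a hedged illustration of this "black box" effect (a toy Python example, not taken from GAMA): two runs of the same visible model logic can diverge simply because of hidden choices such as the random seed or the order in which agents are scheduled, which is why being able to read and change that code matters.

    # Toy illustration (hypothetical): hidden implementation choices such as the
    # random seed or the scheduling order change the outcome of a simulation,
    # even when the visible model logic is identical.
    import random

    def toy_run(seed, shuffle_schedule):
        rng = random.Random(seed)
        decisions = [rng.random() for _ in range(5)]  # stand-in for agent decisions
        if shuffle_schedule:
            rng.shuffle(decisions)  # a hidden "scheduling" choice
        return round(sum(v * i for i, v in enumerate(decisions)), 4)

    print(toy_run(seed=1, shuffle_schedule=False))  # baseline outcome
    print(toy_run(seed=1, shuffle_schedule=True))   # same seed, different scheduling
    print(toy_run(seed=2, shuffle_schedule=False))  # different seed, different outcome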

Example of the COMOKIT model

COMOKIT's goal was to support the Vietnamese authorities in their fight against COVID-19. It became open source very quickly, so that it could be applied to other case studies but also extended. If you want to have a look at COMOKIT, it is an interesting piece of software and a good example of what can be done in open source modeling right now. It has already been applied to different case studies and is open to contributions.

I will just finish by saying that open sourcing such models, but also the code of the platform, is very important, yet it requires a lot of resources from public institutions and goes well beyond opening the code. We need to keep a history of the different running versions, and we need to write and maintain documentation that goes beyond technical documentation: something readable about the questions the model can answer.

Conclusion

We also need to build and maintain dedicated websites on how to extend the model. These resources are not always available to public institutions. Given that climate change, air pollution and future pandemics will give models a growing importance in the public debate, I think that a European public initiative that is open to researchers and encourages them to share their models would be beneficial.


2. Open source in open science - or how to facilitate technological trust production - Alexandra Giannopoulou

(00:23:40)

Alexandra Giannopoulou is a postdoctoral researcher at the Blockchain and Society Policy Lab at the Institute for Information Law (IViR), University of Amsterdam. Her PhD thesis, obtained in 2016 from the University of Paris II Panthéon-Assas and entitled ‘Les licences Creative Commons’, is forthcoming at editions L’Harmattan. She is an associate researcher at the CNRS Centre for Internet and Society in Paris. Her current research focuses on privacy and data protection, digital identity, and decentralization. Today she will explain in more detail the place of open source in open science, how it is possible to facilitate technological trust production, and the role of legal tools, especially licenses, and their standardization.

How does open source fit into open science?

There are some main principles of openness that justify and play a grounding role in creating what is now open source: the need for transparency, the need for verifiability, sharing and collaboration, all of which ultimately lead to trust. These principles of openness have expanded to the whole research life cycle. We now talk about the main battlegrounds in open science, which of course would be open access, then open access to research data, then open source (using open source tools to analyze this data and to create a more decentralized, collaborative environment), going further to open peer review, collaborative citizen science, and so on. This evolving way of doing research in the current environment also fits very well with the principles of openness: we need transparency, verifiability and ultimately trust, not only for the research being produced but also for the tools being used by researchers.

Defining open source: code, principles, licensing

When we talk about open source, we mean different things depending on who is speaking:

  • Software: in this case, it is the actual software and code being released to the public, with the possibility to inspect and review it.
  • A set of principles: the principles distilled into the definition of what open source is: free redistribution, non-discrimination, the freedom to create derived works, maintaining the integrity of the author's work.
  • Licensing: the legal instrument that ensures these principles are incorporated and applicable to the public with whom the code provider shares the code. Even at this level, beyond the approach that creates the definition, we have different licenses: there is a very complex network of many open source licenses that we use.

Open source in the applicable normative framework

What started several years ago as a "bottom-up" approach has actually gained a lot of formalization in the legal frameworks of different countries and within the European Union:

  • In France: we have the Digital Republic bill, which pushes publicly funded research to operate according to open principles (to publish open access, to open research data and to use open source) for more trust production through the instruments used in our higher education institutions.
  • In Europe: there are a lot of different strategies, such as the European Commission's Digital Strategy, the Open Source Software Strategy, and already existing infrastructures such as the European Open Science Cloud, with many principles that refer to transparency, trust, democratic society, ensuring openness and the possibility for everyone to verify the research that has been created. All of this needs to operate within this complex network of different licenses. This already looks like a big win; however, to this day there remains a myriad of shortcomings in the legal environment that has been created and in the way it is actually applied in practice by researchers.

Between theory and practice: open source points of friction

As I said earlier, we see a growing need for digital infrastructures in our academic institutions (with the use of digital tools in our daily tasks, etc.), and thus a growing trend towards the tech-platformization of education and academic research. This comes with a lot of issues from a corporate and data-production point of view. Depending on the country, these infrastructures vary between institutions, and only some have the financial capacity and knowledge to build their own infrastructures and create their own sets of principles around them.

The way we build these infrastructures is going to dictate the type of research we will be doing in the future. The rules mentioned above do not go far enough if they do not support researchers in participating in decision making and governance, and do not accompany them through the problems they may face when they need to publish open access or use open source tools without any real support or explanation behind the requirement. Even as a lawyer, articulating the interoperability between the licenses and all the different sets of rules is hard, so it is very difficult for researchers to make sure they are respecting what a license tells them. There is also the question of license standardization, an ongoing discussion in the community: depending on the discipline, we should consider supporting and reinforcing the use of specific licenses and tools. Rules tend to become empty if there is no enforcement, no effort to ensure that researchers have what they need, and nothing to push them to respect those rules.


3. Open Source & Open Science @CERN - Myriam Ayass

(00:38:15)

Myriam Ayass is a Legal Advisor for the Knowledge Transfer Group at CERN, the European Organization for Nuclear Research, and is specialised in intellectual property law. She joined CERN in 2005 and has been working in the field of technology transfer since that date. It is in this context that she became involved with open source in general, and more specifically with open hardware. As one of the authors of the CERN Open Hardware Licence, she will now present the combination of open source and open science, especially regarding software and hardware dissemination.

Introduction: what CERN is

CERN is a research organization in the field of particle physics, founded in 1954; the laboratory sits on the border between France and Switzerland. We build accelerators, the most recent one being a twenty-seven-kilometre-long circular accelerator, and CERN is probably one of Europe's first joint ventures of science for peace, so to speak. A big team of physicists, engineers and technicians is trying to identify the basic constituents of matter. We have accelerators, and we also have detectors that are built by other people.

At CERN we have about 2,500 staff, 1,600 other paid personnel and 13,000 scientific users. We are a laboratory that has built this accelerator, and a lot of people come to CERN to conduct their experiments. We have an operational budget of around 1 billion, and over 600 institutes and universities use the facilities. Although we have 23 member states today, CERN was founded by 12 member states, and we also have other types of associate or observer entities from well beyond those countries.

CERN functions around four main pillars: science, technology, training and cooperation. This is really what we are here to do, and it is important to remember because it underpins how the organization works, including vis-à-vis open science and open source. This is also fundamentally about impact.

The unique legal environment of CERN

We are a treaty-based scientific IGO (inter-governmental organisation), which means we are a bit different from the CNRS. We are operational rather than political, so also quite different from the UN. We sit between Switzerland and France, so there are a lot of legal questions around this that we need to solve. We are probably one of the world's biggest and most diverse scientific collaborations. We have a governing body, the CERN Council, and a convention which states what CERN was created for: science for peace. The convention also states that "the organization shall have no concern with military endeavors", and this comes up in open science and open source as well.

We are a sui generis organization, quite different from the corporate world but also from academia, PROs and RTOs. We are basically a research infrastructure, and this matters because we do not have the same concerns as other parts of academia: we do not have to fight for funds, since we are funded by our member states. We are here to develop the accelerators, detectors and ICT infrastructure required to host the research programme.

CERN is a scientific environment. Because of the enormous number of institutes and people coming to CERN, we have traditionally worked under an open science model. I would even go further and say that we are pioneers in the evolution of this model through our various efforts regarding open access and open data. The convention itself says that the results of CERN's work shall be published or otherwise made widely available, and this has traditionally been interpreted as open science. We have also done a lot of open innovation, working with industry to help them innovate on certain issues. There has been a blurring of the boundaries between open science and open innovation.

Here are a few examples of what we have done:

  • Data: we generate a lot of data, and it is openly available on our website, where you can explore more than two petabytes of particle physics data. It is not only datasets but also software, documentation, etc.
  • Publications: our scientific information service is quite innovative and plays a leading role in the open access movement; it launched and participates in the SCOAP3 consortium, which was a real push for open access models. From 2014 to 2018 we went from most HEP publications being behind paywalls to nearly 90% of scientific articles being available to everyone. We have also concluded some read-and-publish agreements.
  • Software: we do a lot of different things, not only in terms of software but also in terms of dissemination. It is important to recognize that we have a very distributed software development environment: we do not necessarily have centralized decisions, policies, tools or coding conventions. The authors of software are not only CERN employees but also associates and other contributors from all around the world. This makes issues like ownership and dissemination very complex. Moreover, as with a lot of software, there is a lot of component-based software, and, as Alexandra was saying earlier, it can be a headache to work out how you may redistribute all of it. We work in many different domains and on many different applications.
  • Hardware and technology: it is a bit like software, with a collaborative model, but usually on a smaller scale, with more identifiable contributions, which generally makes it more manageable.
  • Knowledge transfer: ultimately, the innovation performed at CERN has applications outside the world of particle physics, which brings us to knowledge transfer. Knowledge transfer is part of the mission of the organization: the goal is to bring the innovation to the outside world to maximize dissemination and impact. Where revenue is generated, the contributors get a fair share; some patents and licenses are generated.
  • KT framework: we had to put in place a policy framework to underpin all the transfer activities.

Software dissemination policy

We do have a software dissemination policy; it is a few years old, so it is fairly recent. We look at the software architecture, the contributors, the applications and use, and whether it relies on other open source modules. We try to use open source as much as possible, to maximize the impact of dissemination, but we also adopt a proprietary approach in instances where, for example, derivative works produced by non-experts could cause prejudice to the reputation of the organization or to the organization itself, or where the application field is identified and there is interest from a commercial partner.

We try to take a pragmatic approach: we also look at who the developers and the scientists/users are, and we take into account the reputational risks as well as the team's aspirations and commitment. The reality is that open source is still misunderstood. There is a definition of what open source is, and often you find that people want something to be open source but at the same time do not want companies to use it; well, that is not open source. There is confusion between the software being open and distributed, and the open source freedoms.

At CERN, the convention says we shall have no concern with military applications, but a restriction of that kind in itself goes against the open source principles: if we were to say "it is open to everyone except the military", that would not be open source. So we do have to compromise in certain places. The other reality is that sometimes open source is simply the easiest choice: when there are hundreds of people contributing to the software from dozens of different institutes, it is just much easier to have everybody agree on an open source license, which is widely used and understood.

Hardware

Here too we need to take a pragmatic approach: there are a number of things we can do, but there are limits. We do open hardware because it allows us to publish the design, to get peer review and to enable design re-use, all of which leads to improvement. This led us to draft the CERN Open Hardware Licence; we are still working on it and testing new dissemination models.

Conclusion

Open source is still not a well-understood concept, even in places where openness is a core value. There are still conflicts between open science and the interests of researchers (for example, getting credit, getting funding, etc.), and we also need to reconcile the interests of the stakeholders. Impact and dissemination are not necessarily the same thing, and they are not achieved in the same way. Finally, there is a role for places like CERN, which is to innovate not only in the scientific domain but also in the dissemination of legal knowledge. In the end there may always be some overarching constraints, like the no-military-application clause, but ultimately we need to be pragmatic about it.


4. Trace, Test, Treat: data failure in a pandemic - Carlo Piana

(00:58:40)

Carlo Piana is an Italian Free Software advocate and qualified IT lawyer based in Milano. A long-standing advocate for open technologies and digital freedoms, he has been general counsel of the Free Software Foundation Europe, is on the open source and IP advisory of the United Nations' Technology Innovation Laboratories, and is an editor of the Journal of Open Law, Technology, & Society (JOLTS, formerly the International Free and Open Source Software Law Review). He advises on open source compliance, especially in the field of embedded devices, open data and data protection. His presentation gives us an insight into the use of contact tracing applications during COVID-19 and the failure of this use of open source in Italy, what an open source data collecting app could look like, and how an open science approach could have helped acceptance and adoption.

As the title states, I am bringing you a failure: the unsuccessful story of Italian contact tracing, a component in the fight against COVID-19 which is dreadfully missing. It teaches us a lesson about what open data, open science and open source mean in a world that is moving from standalone to service-centric applications. It is true that, more than ever, in a pandemic our daily lives are impacted by scientific data.

This map shows the coloured-zone system: it is supposed to be based upon "scientific data" and to tell citizens what they are or are not allowed to do depending on their region and the spread of COVID-19. I use quotation marks around "scientific data" because, to be deemed scientific, this data should have certain characteristics that it does not have here: it should be assessed by a scientific committee, and we should know what the criteria are that link the data, the results, the findings and the decisions. In this case, none of that is present. The question is: how is this data collected and shared?

IMMUNI: a nationwide application

The data relies on one single application called IMMUNI. It was selected by a scientific and technical working group, and the award was granted in a way that nobody really understands: it was awarded in a cumbersome and non-competitive way. What interests us is that this is the only application allowed to trace contacts; no other entity can offer similar services, and this prohibition is enshrined in the law. A region, a municipality or a private person cannot offer an alternative solution to trace contacts without going against the law. The good news was that it was going to be an open source solution, on GitHub and under the GNU AGPL v3, but it is not really open source, and surely it is not something I would call "scientific". A lot of things are missing: the backend is neither open source nor accessible, and even the application itself, which was supposed to be an open source application using the Bluetooth technology of the phones, ended up being a frontend to proprietary libraries. The tracing services are not provided by IMMUNI but by something we cannot inspect, provided by Android and iOS. IMMUNI is a façade of free software; I would call it "open-washing", because it is very far from what we could call an open source application.

My requirements for it to be called FOSS

We must not focus on the license but on the functions: what the application does, the workload it performs, the API, everything should be open and free. If it is not, it is not open. All communication between front end and back end should be based on open protocols that everybody is allowed to use and read.

Simply using an open source license is not enough; we must think of the result we want to achieve. We must be able to build the application ourselves and make our builds work as the official one does. The whole stack should be reproducible: anyone should be able to take it, make an identical build, compare the two, swap them in a similar environment, verify the result and see what the build is made of. A minimal sketch of this check follows. I presented these requirements and received a lot of pushback, so some objections and my replies are listed in the next section.
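As a minimal sketch of the reproducibility check described above (Python, with hypothetical file names): rebuild the application from the published source and compare your binary, byte for byte, with the officially distributed one. In practice platform signing data may need to be stripped before comparing, but the principle is the same.

    # Sketch of the check a reproducible build enables (hypothetical file names):
    # if the digest of the independently rebuilt binary matches the official one,
    # the published binary really corresponds to the published source code.
    import hashlib
    import sys

    def sha256(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        official, rebuilt = sys.argv[1], sys.argv[2]  # e.g. app-official.apk app-rebuilt.apk
        if sha256(official) == sha256(rebuilt):
            print("Builds are identical: the official binary matches the source.")
        else:
            print("Builds differ: the official binary cannot be verified against the source.")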

Objections and replies

  • Objection — rogue applications: if you share the source code, you can unleash rogue applications which will go out and collect data. Reply: this is true to an extent, but keeping the source closed does not really prevent it from happening, and it is still possible to make an independent build signed with a certification key.
  • Objection — overall security. Reply: use a public key for the backend, so that nothing fake or dangerous can be published; you can also make a reproducible (yet unofficial) backend. A sketch of this kind of public-key check follows the list.
  • Objection — proprietary libraries are necessary to use the phones' BLE backend. Reply: there is no real solution to that, which shows that we do have a problem here, as we rely on oligopolies.
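As a hedged illustration of the public-key idea mentioned in the replies above (a generic Ed25519 signing and verification sketch using the third-party "cryptography" package; this is not IMMUNI's actual mechanism): the backend, or anyone auditing it, accepts only payloads whose signature verifies against a published public key, so fake or tampered data can be rejected.

    # Generic public-key signature check (hypothetical, not IMMUNI's mechanism).
    # Requires the third-party "cryptography" package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # In reality the private key stays with the publisher; only the public key is shared.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    payload = b'{"exposure_data": "..."}'  # illustrative payload only
    signature = private_key.sign(payload)

    def accept(data: bytes, sig: bytes) -> bool:
        # Accept the data only if the signature verifies against the published key.
        try:
            public_key.verify(sig, data)
            return True
        except InvalidSignature:
            return False

    print(accept(payload, signature))                           # True: genuine payload
    print(accept(b'{"exposure_data": "tampered"}', signature))  # False: rejected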

All's ill that ends badly

It is a long story with an unhappy ending:

  • Small uptake: a few million users out of 60 million inhabitants. We aimed at 40 million in order to run this application successfully; maybe 3 to 4 million people are actively using it.
  • Little data
  • Meaningless results: we do see COVID infection tracing, but with a lot of delay; only 15 to 20 days later do we get access to the infection data.
  • No tracing in place, practically
  • The only things we were able to offer people were lockdowns, red zones and the hope of a vaccine.

A parallel with electronic voting

I am totally against it, because you cannot ensure all the necessary characteristics. But you must consider that if you want to make it open source, you must make sure that everything is at least reproducible, that independent builds are possible, and that you can test and select the build that will actually be used.


Question time

Célya Gruson-Daniel: Thank you Carlo for this presentation. We see that science is everywhere; we are using science and data to solve the issues we are facing, and there are a lot of links between all of your presentations. The distributed, collaborative way of working also shows the difficulties for lawyers and legal advisors in dealing with all these different components.

Mike, from the chat: @CERN Does Open Source need Knowledge Transfer? Maybe GitHub.com marketing is enough?

Myriam Ayass: Actually, I don't think it is enough, for several reasons. Before you decide to go open source you need to make sure that you can distribute under the conditions that you want: you need someone to check license compatibility, to do a little due diligence, to make sure you are not infringing the rights of somebody else and that all the actual owners and authors are involved. This is what we do in the Knowledge Transfer group. The second thing is that we want to have maximum impact, and very often it is not enough to just put something out there for it to have an impact. Somebody who is looking for it will find it, but if we want to promote it, have a lot of people use it and have different applications in different technology domains, you need a bit more than just putting it on a repository.

Célya Gruson-Daniel: Another question from the chat, from Antoine Blanchard (Datactivist): "What software repository does CERN use? COMOKIT uses GitHub, how did you make that decision?"

Myriam Ayass: GitHub and GitLab; we don't have a general approach, and other people in the organization might use other things, but those are the main ones we use.

Alexis Drogoul: We chose GitHub for very practical reasons: since most developers are in developing countries, we needed a worldwide infrastructure able to guarantee correct download and upload times throughout the world.

Célya Gruson-Daniel: We have another question about the CERN Open Hardware Licence. Myriam Ayass, what is the feedback on the use of the new CERN OHL v2? Is the licence used by a large community outside CERN's physicists and users?

Myriam Ayass: So far we have had good feedback; it was released a bit less than a year ago. What we are really happy about is that it has just been OSI certified: we submitted it to the Open Source Initiative to check its compatibility with the open source definition, and we worked hard on making sure it met that definition. It is quite difficult to know about adoption beyond CERN, but hopefully, now that it has been certified, it can be one of the options offered in repositories and adoption will grow beyond what it is today.

Célya Gruson-Daniel: We have a question from Benjamin Jean about collaboration: do you have examples of collaborations with other institutions (from different countries) made feasible by an open source or open hardware model/licensing?

Alexis Drogoul: Actually, it is difficult to tell whether a collaboration would still have occurred under closed-source mechanisms. One of the reasons we can establish collaborations with other countries is the fact that they can reproduce and build things: you don't have to buy, you can rebuild, adapt, copy. Concerning models, it has been more or less mandatory for us: if we want people to use a model, we need them to trust it, hence to see it and be able to adapt it. So there is no possible going back to a closed collaboration.

Carlo Piana: Just a clarification, as I ran out of time and may not have made clear what reproducible means. Reproducibility is very akin to a scientific process that can be reproduced: in an experiment you have all the tools to reproduce it and see if you end up with the same result. In software we need this to make sure that a given application was really made from a given source: you give me the source code and I can end up with the same binary. This is incredibly difficult and requires a lot of effort, and it is often overlooked in science, even though it is important.

Célya Gruson-Daniel: Very interesting. We can see that, from an activist point of view, transparency and open data were very important, and more and more the topic of reproducibility will become a central topic of governance. On this point, I know that Alexandra has exchanged in the discussion tab about the COVID test leaks in the Netherlands; maybe you can share the question with us.

Alexandra Giannopoulou: Yes, the question was about the recent leaks of COVID test data in the Netherlands. There was a security breach in the government database that stored this sensitive data, and a bad job was done of safeguarding the interests and rights of the people concerned. There are legal tools, under the GDPR, to make sure the responsible parties face the appropriate liabilities. However, this has also done a lot of damage to citizens' trust in the government and in the tools it uses to handle this type of emergency.

Célya Gruson-Daniel: Thank you, and yes, I think this shows how these issues of technological trust will be more and more debated and addressed. Benjamin Jean from Inno³ has just shared with us a study about the use of open source in research. I know that your presentations also mention the gap between theory and practice: each of them shows some issues with, and reasons for, sharing and opening code. Sometimes opening the code is not enough; it is also about resources and documentation, and, for the lawyers, about the difficulty of analyzing all this open source code when you later want to do knowledge transfer. Do you have other comments about your presentations, or links to draw between them?

Carlo Piana: I want to praise OSOR for the good work they are doing. There is another institution, in France, that I also want to praise: Software Heritage, a repository which ensures that the source code of all the software we use will be preserved for the future. I think it is very important, because otherwise a lot of knowledge will be lost. Software is often seen as just a tool, but it is not: it is knowledge, part of an increasingly scientific process of work, data, models and documentation. And if you produce software that is not open source, preserve it and eventually make it available to Software Heritage.

Célya Gruson-Daniel: This issue of sustainability is also very important. Speaking of Software Heritage, Roberto Di Cosmo, its president, gave a presentation at FOSDEM that can be viewed here.

