AI control and science: economists' survey (October 2023)
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
A better understanding of how they work and how best to use them. The scientific community can help study and explain the consequences of AI for human society and progress, and can help design rules to control it.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
No. Scientists have the skills and knowledge to deal with different kinds of AI systems and to judge and manage both the positive and negative sides of AI. In a sense, they are among those at the lowest risk of being overwhelmed by AI.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
1. They could either boost the flow of ideas or stifle creativity and the progress of knowledge; it depends greatly on how AI systems are used and by whom. 2. Data and scientific integrity can be more at risk than before. 3. To the extent that AI can increase productivity and create new jobs, it can help economic growth. At the same time, AI will cause a reshuffling of job types, which could redistribute wealth in society.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
The public sector could investigate how AI will affect the functioning of societies and their public institutions, starting from the democratic process and the day-to-day formation and evolution of people's beliefs about social and economic issues of common interest. The public sector may leave more freedom to the private sector when it comes to business applications of AI.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
The two systems could support one another in promoting knowledge, its applications, and its economic benefits. However, I find this question a bit hard to fully grasp; many details of the interaction between the public and private systems matter and are likely to determine different outcomes in different fields.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Understanding what is driving the models, so that we can understand what to fix when things go wrong.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Yes
Q4 Respondent skipped this question
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Q5 Respondent skipped this question
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Q6 Respondent skipped this question
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Advantage: discovery of patterns in phenomena that matter to our questions. Disadvantage: patterns that mislead investigation of phenomena that matter to us.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Presuming that preferential interaction means access to costly resources, allocation would presumably proceed as it does with other costly resources.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
These are important and useful questions. Surveys are not a place for answers.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Same comment as to #4
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Presumably positive.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
More likely to anticipate and correct problems; safer against uses that benefit individuals at the expense of society.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Yes
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
In general, they probably enhance the flow, but there are concerns about misinformation undermining scientific integrity. Growth could go either way.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Enforcing rules against disinformation and exploitation, also monopolization.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Hard to say, but the effects may not be different for AI.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
It seems obvious to me that sharing knowledge, and making it more transparent and better controlled scientifically, is a matter of general and collective interest.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
As researchers, we are obviously expected to stress-test any kind of new technology, but for cultural spread and dissemination it would be better to improve the safer and less problematic systems.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Any company interest involved should be clearly declared and, where possible, checked.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Public research and the knowledge industry should be involved in the rule-making process, to avoid or limit both censorship and dominant market positions.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
The integration would obviously be positive, but it could also prove difficult.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Better understanding, better choices.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Yes
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Three completely different questions. 1. Yes. 2. To be seen. 3. Enormous potential for higher TFP growth.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Look for increases in monopoly power; it is difficult to know today where they may emerge.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Too general. But a non-profit AI may be more trusted than a for-profit one.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Greater understanding of the systems, and more social control generally.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Not sure
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Not certain
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
It should ban them.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Not certain
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2 Respondent skipped this question
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Yes
Q4 Respondent skipped this question
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Q5 Respondent skipped this question
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Q6 Respondent skipped this question
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Quality control
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Not necessarily as long as there is disclosure.
Q4 Respondent skipped this question
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Q5 Respondent skipped this question
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
I think this is what will happen.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Developing AI systems with limited transparency risks reinforcing a Lucas critique dynamic, in which we fail to understand why our models predict what they do and therefore mistake results that hold only in a particular setting or policy regime for generalizable results. Having the scientific community participate in the control of these systems may help identify cases where AI answers provide a false sense of confidence on questions that do not have definitive answers.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Yes.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Given the way large language models and other AI models ingest large quantities of data from the Internet, with seemingly few acknowledgements of original authorship, I worry that the rise of AI will limit the online publication of creative works, which may retreat behind paywalls in order to avoid the author's style being studied and imitated without attribution. Given the very large costs associated with training AI models, the market for a number of services which used to be performed by humans (such as translation, drafting, and editing) may become dominated by a few firms employing almost exclusively highly paid employees, leading to increasing wealth disparities as intermediate-skill tasks are taken up by machines. This seems likely to increase both growth and inequality.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
The public sector should place clearer restrictions on how information posted on the Internet or otherwise recorded can and cannot be used to develop these models, and explicit restrictions on the use of AI that has not been rigorously tested in situations where AI bias may be a significant policy concern, such as policing, credit scoring, hiring decisions, etc.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Honestly, I'm not sure.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Advantages are better synthesis and communication of research and ideas. Disadvantages are that there could be errors, and it may not be entirely transparent which data are used and whether they are used consensually, despite best efforts to address these issues. Decisions will need to be made that balance different benefits and risks, and while I think the net benefit would be positive, there could be winners and losers.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Yes, transparency is very important.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
It's a difficult balance: on the one hand, too much market power held by only a few players leads to worse experiences for consumers; on the other, there is an element of natural monopoly here. It only makes sense to develop these systems when they are applied to large networks.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
I don't think regulation is legally feasible or makes sense in most circumstances. I think it's important to keep up with the broader goals of antitrust policy and not necessarily exempt this industry from them. Funding and supporting research on these AI models and their impacts is important. This research has also been useful in prompting companies to change their algorithms when they produce problematic output, and it informs us how to engage better with these tools.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Duplication of effort and the stifling of private innovation in this sector. If this can be offered by the private market, and it is not really a public good, then public AI systems may not be a good use of taxpayer dollars. But this could, depending on the circumstances, fall more into the public-goods camp, and those can have huge externalities.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Privacy. Open science.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Yes
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Private AI systems will slow the flow of ideas.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
There should be publicly owned AI initiatives to support privacy, ethical, and open-science concerns.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
The prospect of AI in economic research goes far beyond LLMs. Publicly funded research should be key in developing applications of AI to economic research.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Advantages: Transparency can bring out possible adverse impacts, and therefore research into how to deal with these. Disadvantages: Transparency can become an excuse to kill scientific progress.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Yes
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
This could potentially stanch the flow of ideas and, worse, lead to their misuse. On the other hand, there may be increasing returns to concentrating large resources in a few places; Schumpeterian growth could even be higher. As in other increasing-returns settings, the distributional consequences may not be that nice without policy interventions.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Put in place a system in which the pace of change does not overwhelm human institutions.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
How come SpaceX is a leader in space technology, and OpenAI, Stable Diffusion, etc. in AI? Reserving fundamental scientific knowledge to public AI systems alone is so condescending! Governments underinvest hugely: look at how badly climate change is being fought relative to the need. The proposed functional separation of public- and private-sector roles is restrictive and problematic.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
I don't know
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
I don't know
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
I don't know
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
I don't know
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
I don't know
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Scientists understand the technology. We do not have a good sense of societal values (we have our own values, but they may not be consistent with the diverse perspectives of our societies). I think there is a real danger that transparency leads to (1) worse systems and (2) systems that reflect the values of scientists but not the broader society.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
I don't understand what this means.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
As long as there is sufficient competition, private AI systems will do more experimentation and attempt to be more useful than those controlled by academia or government. I worry about a small government bureaucracy deciding what can be built; it would slow the pace, and perhaps skew the distribution toward their favored constituencies. Government has a big role to play in fostering competition.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Government has a big role to play in fostering competition. Rigorous antitrust enforcement is important. Also, clear standards and rules for liability are needed. Micro-managing regulation is likely to be counterproductive for both efficiency and equity.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Basic science is a key input into applications, which in turn drive growth. So investment in basic science that co-exists with private AI efforts, and focuses on basic-science applications, is useful.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Advantages: democratic engagement, regulatory control, innovation diffusion. Disadvantages: reduced incentives for innovation.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Same answer as to (4). BTW: the assumption that "fundamental scientific knowledge" has been public is flawed. Think about nuclear physics...
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
The public sector should deal with the potential externalities of AI. This means adapting current antitrust frameworks and developing the personnel to apply them.
Q6 Respondent skipped this question
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Transparency ensures attribution, can assist in determining beneficiaries, and can allocate responsibility for harm caused.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Yes - for obvious reasons.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
These private systems can of course control the flow of ideas. They can have a huge, negative impact on scientific integrity. Data are becoming all-important, and the indiscriminate use of data for LLMs infringes on the rights of others, both economic and moral. The pace of economic growth and future prosperity will lie in the hands of those who can deploy and control AI systems.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Regulation is necessary to ensure that rights are respected. A regulatory system should be developed on the principles of transparency and accountability.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
The development of AI systems that can further fundamental scientific knowledge will be extremely valuable. If these systems co-exist, there will be limited economic consequences, as the purpose, function, and market dynamics of these systems will be distinct.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Understanding of basic hypotheses and potential biases; contribution to science and learning; no disadvantages.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
No idea about the consequences, but there are risks of excluding poorer countries.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
The public sector probably needs to increase its investment in AI, not only to be able to regulate it but also to provide alternative sources of funding and keep it a public good.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
Same as what happened with the web: synergies between public (albeit military) and private initiatives.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
Transparent systems (those that we can understand) might be more limited than those we can't understand. But at the same time, there might be greater risks with the things we can't understand.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
I think about IRBs, which guide research, and the underlying goal of doing research that maximizes benefits and minimizes harms. I would think that in order for scientists to do research responsibly, we need to interact with AI systems that have these features (otherwise, how could IRBs ever even imagine signing off?).
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
There are going to be a lot more ideas going out, but because LLMs are basically highbrow autocomplete, we risk getting stuck on paths that may be less than optimal. Ideally, they take some of the BS out of our job. In actuality, maybe they just create near-infinite BS we have to screen out.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
I think AI systems should be highly regulated in how much power they have until we can grasp or solve the human alignment problems with AI. We have to imagine not only how altruistic people will use general knowledge, but also how dangerous actors will.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, were to co-exist with private AI efforts focused on specific applications?
How do we solve the compliance problems with true AI? If we don't solve alignment, then I think more targeted, specific applications of AI are safer than more general ones.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
What are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and in whose control the scientific community participates?
I am not sure I understand the question because it assumes I understand some lingo about AI "control".
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and participation in control)?
Generally, I would think transparency is good, in particular if we can observe how and where the AI systems we may be training through our use are being employed.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
It really depends on whether or not they can protect the trade secrets indefinitely; I find it unlikely this will be possible. Even if they succeed, proprietary parameters are valuable secrets only if the AI system remains on the cutting edge. My guess is that innovation will move a lot faster in AI than, say, the formula for Coca-Cola, in which case failure to keep trade secrets effectively could mean more innovation in the medium run than we would get if the details were instead patented and disclosed. It's also not clear these AI firms will be managed well enough to maintain their advantages. I don't have an opinion on data and scientific integrity.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
I don't know enough to comment.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
If the public systems are dedicated to well-defined, specific applications it would probably be positive for economic growth as long as there was not too much encroachment on the private market for AI systems.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
It is important that we think carefully about adopting new technologies.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
Yes
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Lots of possibilities here
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Weird survey. Not sure what you're asking here. Or how you are going to analyze this. Maybe use AI? Good luck!
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
I don't know
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
I think it would reduce the risk of unintended outcomes.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
Not necessarily.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Negative effects in all those areas.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Ensure competition.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
Facilitate the flow of ideas.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
Advantage: improve quality of AI systems. Disadvantage: slowing down their development
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
I am not going to tell other scientists what to do. But it could well be in their own interest, yes.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
These questions merit a lot of future research! The forces of competition may select the better available AI systems for widespread adoption, but it is of course a problem if research in the area is not open and transparent.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Another big research theme! When it comes to risk, it is important that there is a legal basis for suing providers of faulty (AI) products
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
This might foster beneficial competition
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
Advantages include development of expertise and deeper understanding of the strengths and weaknesses of these systems. A possible disadvantage is focusing on the wrong thing.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
This seems necessary for replicability and scientific integrity.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
I'm deeply worried about AI systems disintermediating traditional systems and firms that collect information, knowledge, and insight. Stack Overflow, which just laid off a quarter of its employees due to ChatGPT, is an example. Scientific progress and integrity are at risk.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
I don't see how the public sector can truly regulate AI models, but I think it can foster transparency and provide a centralized resource for information about these systems.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
An obvious concern is that the private AI systems sequester data and code for the sake of profit. This seems analogous to Myriad Genetics not sharing its genetic data for scientific purposes. A great loss to medicine and patients.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
Advantages- safeguards against possible harms, disadvantage- slows progress (or some entities won’t wait for scientific community to weigh in)
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
Yes
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
AI has features of public goods and it has both positive and negative externalities depending on its application. If its development is private, we are unlikely to achieve socially optimal investments and will miss the opportunity to build in these investments.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Transparency regarding models should be required. Also, there should be ongoing assessments of the largest theoretical and practical harms and benefits. I do not know enough about technology regulations to say more.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
I think some public involvement is better than all private, but the private systems are likely to develop more quickly than public ones so it is likely we will still experience over investment in harmful AI and under investment in socially beneficial AI.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
Imposing transparency in an unnuanced way could disincentivize development of AI systems, by increasing costs and risk. If transparency is a dialogue between the community and developers there's more room for constructive criticism and collaboration.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
Seems like these would be more in keeping with the values and incentives of our own community, so yes.
Q4 Respondent skipped this question
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
I would argue for regulating their uses, according to existing laws (about discrimination, deception, etc) rather than the models themselves.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
I think this already exists, in the form of open vs closed source models.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
Advantages are greater public buy-in, better use of these systems and less potential for unanticipated effects. Disadvantages are that it will take time.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
It would be desirable (I think) but seems hard to enforce.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
I'm not sure I know the answer. It seems to me that we cannot have "reproducible" science if we don't understand what happens inside these systems.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
The ideal would be development of well-informed policies that protect IP that is being ingested into these systems, but does not hamstring their development.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
I don't know
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
More transparency would allow us to better monitor the type and reach of surveillance and individual data collection, as well as potential sources of discrimination in predictive algorithms. Another area where greater transparency is crucial is knowing who is using what AI system -- in particular large companies and governments.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
Yes.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
In addition to issues of data integrity and privacy and discrimination concerns, control by corporations may severely affect the direction of research and lead researchers to make (often willingly) compromises that violate the rules of the scientific community.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Strong transparency requirement for underlying algorithm. Clear division of roles for scientists, especially in publicly funded academic institutions. The mixing of industry-like (including consulting, invited speeches, practitioner book writing etc.) and scientific activities is deeply problematic.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
Potential selection/sorting of researchers and, more generally, knowledge workers between industry and (publicly funded) academia. It is not clear whether this sorting would be beneficial or not. It might drive good researchers toward business, but it could also clarify roles and keep them separated, which would increase transparency.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
The big advantage I would see is the ability to learn from each other. If the goal of AI is to provide accurate information and limit hallucinations, then sharing insights across systems can help maximize that goal.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
Yes. As people use AI, they should cite the AI they use. Support by scientists will hopefully help encourage the use of these preferred AI systems.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Although the systems are private, their access is public. I suspect the flow of ideas that use AI wouldn't be affected. But the flow of ideas across AI systems would presumably be restricted, thus making AI less effective than if shared and slowing the pace of economic growth.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
I think a big role. The potential for misuse is high.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
It seems it would be more costly to develop the two side by side; I have no sense about the costs of developing these and how wasteful it might be.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
Checkable results and checkable news.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
yes
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
Many ways, both positive and negative.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Whatever is possible.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
If the public system verified results that would help a lot.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
I dislike the verb "control." Economics and science are becoming increasingly politicized, in part via mechanisms that allow the tastes of the group to affect projects that individuals can pursue.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
Sure, if they want to.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
I think there are issues analogous to those raised by proprietary data, except that they might be less severe with respect to AI tools in that there will probably be several providers of these tools, and also the option not to use them for many projects.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
Nothing for the moment.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
Positive, if the public systems were any good.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
Advantages: may reduce market power, and may reduce probability that truly awful outcome occurs (e.g., advanced machines turning on humanity); disadvantages are that "bad actors" may learn more about the systems.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation) ?
Not sure.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
I am not sure.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
I am not sure.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
I am not sure.
Q1
Do you think that it is useful for the scientific community to discuss the control of AI systems?
YES
Q2
Which are the advantages and disadvantages of promoting the development of AI systems that are more transparent, and where the scientific community participates in the control?
Compared with the obvious alternatives (development of AI systems by governments or for-profit companies), the scientific community has the advantage of being able to consider the general welfare of society as a whole. Also, the scientific community has a track record of developing useful technologies when it is not influenced too much by governments and/or for-profit institutions.
Q3
Should scientists interact preferentially with AI systems that have these features (more transparency and control participation)?
Yes.
Q4
How might the rise of these private AI systems affect the flow of ideas? What does this imply about data and scientific integrity in the future? What does this imply about the pace and distribution of economic growth?
AI systems provide powerful tools for collecting information and advancing knowledge in many areas. The private AI systems, however, can retard the development of the AI technology itself.
Q5
What role should the public sector play in regulating these AI models, to maximize their benefits and minimize their risks?
The public sector obviously should try to balance the benefit of intellectual property protection (high incentive for technological advancement) and its cost (slow dissemination of useful knowledge). The current patent system has probably protected intellectual property rights too much, so the system ideally needs to be changed to increase dissemination.
Q6
What would be the economic consequences if public AI systems, dedicated to fundamental scientific knowledge, would co-exist with private AI efforts, focused on specific applications?
That would be a good start.