Início

Questões de Concursos TCE SP

Resolva questões de TCE SP comentadas com gabarito, online ou em PDF, revisando rapidamente e fixando o conteúdo de forma prática.


1141Q401463 | Direito Administrativo, Procurador, TCE SP, FCC

A União Federal pretende implantar um gasoduto subterrâneo para transporte da produção de gás de uma região para outra. O trajeto do gasoduto atinge parcialmente imóveis particulares e imóveis públicos. Para materialização da obra pretendida, que acarretará restrição parcial do aproveitamento dos imóveis, a União deverá
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1142Q619875 | Informática, Engenharia de Software, Agente de Fiscalização Financeira, TCE SP, FCC

O conceito de ator pode ser representado graficamente na UML 2.0 nos diagramas de

  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1143Q1032761 | Raciocínio Lógico, Equivalência Lógica e Negação de Proposições, Auditor de Controle Externo, TCE SP, VUNESP, 2025

Considere a seguinte afirmação:

Se a pessoa é trabalhadora e honesta, então a pessoa se sente realizada e olha para si mesma com admiração.

Uma afirmação que corresponde à negação lógica dessa afirmação é:

  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1144Q1032763 | Raciocínio Lógico, Quantificadores, Auditor de Controle Externo, TCE SP, VUNESP, 2025

Considere verdadeiras as afirmações:

I. Qualquer professor sabe ler.

II. Algumas pessoas que não frequentaram escola sabem ler.

A partir dessas, e somente dessas informações, é logicamente correto concluir que

  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1145Q1023290 | Inglês, Interpretação de Texto Reading Comprehension, TI, TCE SP, FGV, 2023

Texto associado.

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

Based on the text, mark the statements below as true (T) or false (F).

( ) Chatbots have been trained to emulate human communication.
( ) Problems in cybersecurity have ceased to exist.
( ) Control over confidential data is still at risk.

The statements are, respectively:
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1146Q1032768 | Raciocínio Lógico, Equivalência Lógica e Negação de Proposições, Administração, TCE SP, VUNESP, 2025

Considere a afirmação:

Se todas as pessoas são sensatas, então a convivência é pacífica e reina a felicidade.

Uma afirmação que seja equivalente lógica em relação à afirmação dada é:

  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1147Q1032762 | Raciocínio Lógico, Raciocínio Matemático, Auditor de Controle Externo, TCE SP, VUNESP, 2025

A sequência a seguir foi criada com um padrão lógico:

10, 20, 40, 20, 30, 60, 30, 40, 80, 40, 50, 100, 50, 60, 120, 60, ...

Considere a soma entre o 26º elemento e o 34º elemento. Subtraia essa soma do 33º elemento. O resultado dessa subtração é

  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1148Q1023294 | Inglês, Orações Condicionais Conditional Clauses, TI, TCE SP, FGV, 2023

Texto associado.

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

“If” in “if they find a combination of words” (5th paragraph) signals a:
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1149Q1032769 | Raciocínio Lógico, Sequências Lógicas de Números, Administração, TCE SP, VUNESP, 2025

A sequência a seguir foi criada com um padrão lógico:

4, 2, 2, 6, 2, 3, 8, 2, 2, 2, 9, 3, 3, 10, 2, 5, 12, 2, 2, 3, 14, 2, 7, 15, 3, 5, 16, 2, 2, 2, 2, 18, 2, 3, 3, 20, 2, 2, 4, 21, 3, 7, 22, 2, 11, 24, ...

Os números 36 e 44 pertencem a essa sequência. Considere os cinco elementos que aparecem imediatamente após o número 36 e considere também os quatro elementos que aparecem imediatamente após o número 44.

A soma desses nove elementos é um valor a partir do

  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1150Q1088548 | Direito Constitucional, Administração Pública, Administração, TCE SP, VUNESP, 2025

Por meio de uma emenda à Constituição Estadual, foi fixado, como limite único à remuneração dos servidores públicos estatutários do poder executivo e legislativo do Estado X, bem como dos municípios neste situados, inclusive para os subsídios dos deputados estaduais e dos vereadores, o subsídio mensal dos desembargadores do respectivo Tribunal de Justiça, equivalente a noventa inteiros e vinte e cinco centésimos por cento do subsídio mensal dos ministros do Supremo Tribunal Federal.

Acerca do caso hipotético, assinale a alternativa correta.

  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1151Q1023291 | Inglês, Interpretação de Texto Reading Comprehension, TI, TCE SP, FGV, 2023

Texto associado.

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

The newspaper headline expresses the agency’s:
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1152Q1066892 | Administração Pública, Gestão por Processos, Administração, TCE SP, VUNESP, 2025

Frente a diversas atividades a serem realizadas, determinado órgão público optou por fazer uso da Matriz GUT para estabelecer prioridades de ação. Ontem, durante o uso dessa ferramenta, um dos problemas (em relação aos demais problemas) recebeu a seguinte classificação: sem gravidade, ausência de pressa e que desaparece ou não piora ao longo do tempo. Assim, tradicionalmente, o produto da avaliação desse problema diz respeito a
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1153Q1023292 | Inglês, Interpretação de Texto Reading Comprehension, TI, TCE SP, FGV, 2023

Texto associado.

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

According to the text, attacks, scams and data theft are actions that should be:
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1154Q1032770 | Raciocínio Lógico, Equivalência Lógica e Negação de Proposições, Administração, TCE SP, VUNESP, 2025

Considere a afirmação:

Se os jovens são a força da sociedade então os de meia idade são a sustentação.

Uma negação lógica para essa afirmação é:

  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1155Q1023293 | Inglês, Palavras Conectivas Connective Words, TI, TCE SP, FGV, 2023

Texto associado.

READ THE TEXT AND ANSWER THE QUESTION:



Chatbots could be used to steal data, says cybersecurity agency


The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.


The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.


The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.


Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.


For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.


Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.


According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.


The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”


The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.


Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

In “Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard” (4th paragraph), “such as” introduces a(n):
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1156Q1066891 | Administração Pública, Governabilidade, Administração, TCE SP, VUNESP, 2025

Entre os modelos ou paradigmas da Administração Pública, encontra-se a governança pública. A governança tem suas origens atreladas
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1157Q1066894 | Administração Pública, Governabilidade, Administração, TCE SP, VUNESP, 2025

Parte da literatura defende que uma das limitações referentes ao governo eletrônico – o que pode afetar o controle social, a cidadania e a accountability, além de trazer à tona os conceitos de governo digital ou transformação digital – diz respeito à presença
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1158Q1066893 | Administração Pública, Modelos Teóricos de Administração Pública, Administração, TCE SP, VUNESP, 2025

Na análise dos agentes políticos, uma vez que determinado plano plurianual (PPA) dispõe de programas bem planejados e elaborados, a sua má implementação tem sido causada por falhas da burocracia de nível de rua. Na literatura de políticas públicas, esse processo de responsabilização é denominado
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1159Q1088544 | Direito Constitucional, Direitos Individuais, Administração, TCE SP, VUNESP, 2025

Considere que João foi convidado a ocupar um cargo em comissão no município X, mas foi informado, pelo setor de pessoal, que não poderia tomar posse, pois constava o registro de que teve suas contas rejeitadas, enquanto ordenador de despesas, pelo Tribunal de Contas. Convicto de que não tinha qualquer relação com o caso, após uma breve investigação, constatou tratar-se de um erro do tribunal. Ele apresentou um pedido administrativo para a retificação dos dados, que permaneceu sem resposta, o que o motivou a propor uma ação judicial para resolver o problema.

De acordo com a Constituição Federal, o remédio constitucional adequado para viabilizar a retificação dos dados é

  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️

1160Q1023297 | Inglês, Interpretação de Texto Reading Comprehension, TI, TCE SP, FGV, 2023

Texto associado.
Is It Live, or Is It Deepfake?


It’s been four decades since society was in awe of the quality of recordings available from a cassette recorder tape. Today we have something new to be in awe of: deepfakes. Deepfakes include hyperrealistic videos that use artificial intelligence (AI) to create fake digital content that looks and sounds real. The word is a portmanteau of “deep learning” and “fake.” Deepfakes are everywhere: from TV news to advertising, from national election campaigns to wars between states, and from cybercriminals’ phishing campaigns to insurance claims that fraudsters file. And deepfakes come in all shapes and sizes — videos, pictures, audio, text, and any other digital material that can be manipulated with AI. One estimate suggests that deepfake content online is growing at the rate of 400% annually.


There appear to be legitimate uses of deepfakes, such as in the medical industry to improve the diagnostic accuracy of AI algorithms in identifying periodontal disease or to help medical professionals create artificial patients (from real patient data) to safely test new diagnoses and treatments or help physicians make medical decisions. Deepfakes are also used to entertain, as seen recently on America’s Got Talent, and there may be future uses where deepfake could help teachers address the personal needs and preferences of specific students.


Unfortunately, there is also the obvious downside, where the most visible examples represent malicious and illegitimate uses. Examples already exist.


Deepfakes also involve voice phishing, also known as vishing, which has been among the most common techniques for cybercriminals. This technique involves using cloned voices over the phone to exploit the victim’s professional or personal relationships by impersonating trusted individuals. In March 2019, cybercriminals were able to use a deepfake to fool the CEO of a U.K.-based energy firm into making a US$234,000 wire transfer. The British CEO who was victimized thought that the person speaking on the phone was the chief executive of the firm’s German parent company. The deepfake caller asked him to transfer the funds to a Hungarian supplier within an hour, emphasizing that the matter was extremely urgent. The fraudsters used AI-based software to successfully imitate the German executive’s voice. […]


What can be done to combat deepfakes? Could we create deepfake detectors? Or create laws or a code of conduct that probably would be ignored?


There are tools that can analyze the blood flow in a subject’s face and then compare it to human blood flow activity to detect a fake. Also, the European Union is working on addressing manipulative behaviors.


There are downsides to both categories of solutions, but clearly something needs to be done to build trust in this emerging and disruptive technology. The problem isn’t going away. It is only increasing.


Authors


Nit Kshetri, Bryan School of Business and Economics, University of North Carolina at Greensboro, Greensboro, NC, USA


Joanna F. DeFranco, Software Engineering, The Pennsylvania State University, Malvern, PA, USA Jeffrey Voas, NIST, USA


Adapted from: https://www.computer.org/csdl/magazine/co/2023/07/10154234/ 1O1wTOn6ynC
Based on the text, mark the statements below as true (T) or false (F).

( ) Deepfakes are circumscribed to certain areas of action.
( ) The sole aim of deepfake technology is to spread misinformation.
( ) Evidence shows that even high-ranking executives can be easy targets to vishing techniques.

The statements are, respectively:
  1. ✂️
  2. ✂️
  3. ✂️
  4. ✂️
  5. ✂️
Utilizamos cookies e tecnologias semelhantes para aprimorar sua experiência de navegação. Política de Privacidade.