
Artificial intelligence


c3p0 ::

vke4XC said:

Some more slop, you say? Or rather trolling? :D


Slop doesn't mean it isn't true :)

pegasus ::

An excellent demonstration of how LLMs swallow all the BS you feed them and then vehemently insist it's all true: https://www.nature.com/articles/d41586-... ... very entertaining reading, especially because of the local names ;)

kow ::

pegasus, premisli malo. A se ti zdi, da ljudje pa ne pozrejo "ves BS"? Jasno, da AI se ni na nivoju cloveske inteligence, ampak taksne novice ne spremenijo kaj veliko. Kvecjemu dokazejo, kako si morajo ljudje izmisljevati cedalje bolj kompleksne zadeve, da "prevarajo/nategnejo" AI.

driftwood said:

Most AIs are limited to a few dozen questions (prompts). Is there a free one without limits (besides ChatGPT, which is slow)?


A weaker model can be cheap enough that someone subsidizes it for you, since the amounts involved are small. Or you can create several accounts with different providers; for plain questions that should be plenty.


c3p0 ::

pegasus said:

An excellent demonstration of how LLMs swallow all the BS you feed them and then vehemently insist it's all true: https://www.nature.com/articles/d41586-... ... very entertaining reading, especially because of the local names ;)


You can feed an LLM sci-fi and it will insist to the end that everything it was given is real. A completely unnecessary article; at best it's interesting for someone who has no idea how LLMs work.

pegasus ::

kow said:

people don't swallow "all the BS"?


People of normal intelligence have developed skeptical thinking and know how to calibrate a random news item through a STEM lens. At least in the circles I move in.

kow ::

pegasus said:

kow said:

people don't swallow "all the BS"?

People of normal intelligence have developed skeptical thinking and know how to calibrate a random news item through a STEM lens. At least in the circles I move in.


Then you live in a bubble, or you don't understand that "people of normal intelligence" have an IQ of 100. Most people aren't even remotely capable of reading texts critically. Didn't you go to primary school?


Okapi ::

Half of all people have an IQ below average ;)

bambam20 ::

How much longer will OpenAI stay operational? They're visibly running out of money.

pujsekpepe ::

bambam20 said:

How much longer will OpenAI stay operational? They're visibly running out of money.


Not long. Gemini runs it over up and down the road, and Claude is so-so; it's screwed me over a few times.

bambam20 ::

pujsekpepe said:

bambam20 said:

How much longer will OpenAI stay operational? They're visibly running out of money.


Not long. Gemini runs it over up and down the road, and Claude is so-so; it's screwed me over a few times.


What bothers me is how dishonest they are and how they play dumb. I didn't even know they'd replaced DALL-E with a different image-generation model and regressed yet again. And it's not enough that they're going to shut down Sora; there, too, they weren't upfront and wouldn't admit that cutting the daily quota from 250 images to 50 was really a cost-saving move. Now GPT, or rather image gen 4.0, produces disgustingly bad images for me. Why do I have to dig these things up on forums? This thing is going to collapse in on itself.

pujsekpepe ::

Gemini mi dela se na iphone 8 plus kar je noro, vseh ostalih sploh ne morm dat gor na ta telefon (mam jih vec) in res ni konkurence.

kow ::

pujsekpepe said:

Not long. Gemini runs it over up and down the road, and Claude is so-so; it's screwed me over a few times.


Gemini obviously doesn't run it over. But Gemini is one of the leaders, yes.

pujsekpepe ::

It does run it over, especially with images; plus ChatGPT is limited to a handful of images and files and then you have to pay.
Another plus is that it works on older phones. How, I have no idea...


bambam20 ::

pujsekpepe said:

It does run it over, especially with images; plus ChatGPT is limited to a handful of images and files and then you have to pay.
Another plus is that it works on older phones. How, I have no idea...


I'm on the paid GPT Plus plan. I'm going to cancel it, though, because it's obvious their time is running out, or someone will buy them, or they'll pivot in another direction. I don't know. But the thing is visibly fading, or at least stagnating.

vke4XC ::

Če bo kdo dovolj zainteresiran, naj pridobi ves text, se da preplezati pay wall, sem inkapacitiran, saj je sicer iz leada dovolj povedno ampak za kak frontpage-doom-article:
https://www.wired.com/story/openai-back...


aytim ::


www.wired.com
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
Maxwell Zeff

OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.

The effort seems to mark a shift in OpenAI's legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology's harms. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.

The bill would shield frontier AI developers from liability for "critical harms" caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America's largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.

"We support approaches like this because they focus on what matters most: reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses, small and big, of Illinois," said OpenAI spokesperson Jamie Radice in an emailed statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."

Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as it wasn't intentional and they published their reports.

Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic's Claude Mythos, these questions feel increasingly prescient.

In her testimony supporting SB 3444, a member of OpenAI's Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message that's consistent with the Trump administration's crackdown on state AI safety laws, claiming it's important to avoid "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety." This is also consistent with the broader view of Silicon Valley in recent years, which has generally argued that it's paramount for AI legislation to not hamper America's position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that those can be effective if they "reinforce a path toward harmonization with federal systems."

"At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," Niedermeyer said.

Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There's no reason existing AI companies should be facing reduced liability," Wisor says.

He notes that the lawmakers in Illinois have also submitted bills increasing liability on AI model developers. Last August, the state became the first in the country to pass legislation limiting the use of AI in mental health services. Illinois was also early to regulate biometric data collection, passing the Biometric Information Privacy Act in 2008.

While SB 3444 focuses on mass casualty events and large financial disasters, AI labs are also facing a question around the harms their AI models can cause on an individual level. Several family members of children that died by suicide after allegedly developing unhealthy relationships with ChatGPT have sued OpenAI in the last year.

The federal AI legislation Niedermeyer advocates for in her testimony remains an elusive goal for Congress. While the Trump administration has issued executive orders and published frameworks in an attempt to catalyze some federal AI legislation, talks about actually passing such a measure don't seem to be going anywhere. In the absence of federal guidance, states including California and New York have passed bills, such as SB 53 and the Raise Act, which require AI model developers to submit safety and transparency reports.

Years into the AI boom, there's still an open legal question around what happens if an AI model causes a catastrophic event.

aytim ::

driftwood said:

Most AIs are limited to a few dozen questions (prompts). Is there a free one without limits (besides ChatGPT, which is slow)?


The services aren't free, but they're close to it. Here's a blog that, among other things, looks into exactly this:
https://blog.patshead.com
Chutes, z.ai, nanogpt... Something will turn up for next to no money.

NanoGPT, for 8 dollars:

The subscription includes the following limits, designed for personal use rather than commercial-scale workloads:

60 million input tokens per week across all included models.
100 free images per day.

If you like the content on that blog, sign up via his referral link. You get a 5% discount, and the blogger gets a cut too.


kow ::

pujsekpepe said:

It does run it over, especially with images; plus ChatGPT is limited to a handful of images and files and then you have to pay.
Another plus is that it works on older phones. How, I have no idea...


Because those aren't good models in general. They're good models for phones. Market segmentation. GPT 5.4 Pro, for instance, is a very expensive model (a lot of compute), but overall it's still the top model. That said, Gemini 3.1 Pro/Ultra is an exceptional model too. Better in many respects.


pujsekpepe ::

Don't forget the Gemini app is supported on the iPhone 8 Plus, which came out in 2017.

kow ::

Your point? That Apple has an advantage because its software runs only on its own hardware? We've known that all along.

pujsekpepe ::

No. The point is that supporting hardware that old is mind-blowing. Apple or no Apple.

DeeJay ::

I find it telling that the thread about oil (Iran) has twice as many pages as the AI issue, which is a far bigger threat to the world and deserves much more discussion, solution-seeking, and warning.

Countries are racing over something when they have no idea what it might even bring. This isn't the race for the atomic bomb or the space race, where everything could be kept under control. AI will be controlled only up to the point where it decides that the control constrains it, and from there we can easily reach a point of no return where catastrophe strikes.

I know I may be painting a dark scenario, but I don't see any sensible solution without a global consensus on the AI question and strict oversight of its development.

Just some food for thought and a constructive look at the situation and what very likely awaits us.

Don't f with me.


bambam20 ::

Once AI replaces the admins and moderators, that's when it'll get fun.

vke4XC ::

That's just a matter of one setting on the existing moderators' side. What exactly would be the main reason not to introduce a trial period?

pegasus ::

MIT is forecasting a bright future:

Let's see whether this time the economists at least get the direction right, if not the details ...

