Very interesting topic (again) and also, a very current one! It's kind of scary to think that this whole process is happening with humans knowing so little about it and not having too much control over it either 😥😥
Yes, AI safety is a big concern nowadays. There is a whole field called AI alignment that deals with creating models whose objectives align with those of humanity. Pretty scary to even have to do this!
"On the downside, this means we have no idea what the model has learned"
A sentence that leaves me pensive, to say the least. It seems reality has once again surpassed fiction
What's happening with how these models learn is fascinating. Practice is running ahead of theory: we can't explain it!
https://en.wikipedia.org/wiki/AI_alignment
HAL?
That reference goes over my head 😂