{ "@context":[ "https://www.w3.org/ns/activitystreams", {"Hashtag":"as:Hashtag"} ], "published":"2024-02-19T13:14:21.163Z", "attributedTo":"https://epiktistes.com/actors/toddsundsted", "replies":"https://epiktistes.com/objects/0RWLHH2TfvM/replies", "to":["https://www.w3.org/ns/activitystreams#Public"], "cc":["https://epiktistes.com/actors/toddsundsted/followers"], "content":"

i installed ollama. 20 minutes later i had llama2 running locally. almost all of that time was downloading the pre-trained model. performance and quality are better than i expected! (technical reference point: i'm on a macbook m3 max with 48gb of memory but that does not seem to be in any way required—i'm going to try some older hardware later.)
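
for anyone curious, the whole setup was roughly two commands. this is a sketch of the macos path (assumption: installing via homebrew; downloading the app from ollama.com works just as well):

```shell
# install the ollama cli (assumption: homebrew; the ollama.com app download is the other route)
brew install ollama

# pull and run the llama2 model. the multi-gb model download is what takes most of the time;
# once cached, subsequent runs start in seconds and drop you into an interactive prompt.
ollama run llama2
```

after the model is cached locally, everything runs offline, no network needed at inference time.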

i'm still trying to sort out which llms are really open source (architecture, source code, training data, weights) and which are not.

#ollama #llama2

", "mediaType":"text/html", "attachment":[], "tag":[ {"type":"Hashtag","name":"#ollama","href":"https://epiktistes.com/tags/ollama"}, {"type":"Hashtag","name":"#llama2","href":"https://epiktistes.com/tags/llama2"} ], "type":"Note", "id":"https://epiktistes.com/objects/0RWLHH2TfvM" }