I always wondered whether moving to a series of Mac servers is a good idea or not; I've developed on a Mac for years, however. Traditionally speaking, service providers use our favorite flavor of Linux and load it up with only the necessary packages to keep the OS footprint low and the RAM resources high. Too much footprint, and no real performance increase.

However, with the M1 and its separated computational architecture (that was impressive to type), would it perform well under high-stress environments? For all those "neural network" and AI algorithm developers: could it stand up to the calculations of querying a database? Could it outperform an Intel chip in that arena? Could it outperform a current server? Would it be worth the premium price tag, if the benefits of that computation were actually passed on to the user?

Hypothetically, has anyone seen impressive benchmarks of the M1 performing as a server? API/Web/Database/FTP/Email. These questions haunt me as a service provider. Your thoughts? I was tempted to name this thread "GoFundMe: help me replace all of my servers."

Exactly. I'm a machine learning engineer, and I can say the M1 doesn't outperform a true dedicated GPU (RTX 2080, RTX 3070, 3080, 3090, Tesla V100, Tesla A100), and it's VERY far from outperforming one. Even the RTX 2080 has 10 TFLOPS of FP32 compute and 90 TFLOPS in FP16 matrix multiply (MatMul). I'm also not aware of any means of booting macOS without a UI, so you always have to deal with the "very" memory-hungry macOS GUI, even when using the machine in server mode.
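One way to ground those TFLOPS figures, rather than arguing from spec sheets, is to time a large matrix multiply on whatever hardware you have (for reference, Apple's published figure for the M1's 8-core GPU is roughly 2.6 TFLOPS FP32). Below is a minimal sketch assuming a recent PyTorch install, which supports CUDA GPUs and the M1 via its "mps" backend; the matrix size and iteration count are arbitrary choices, and whatever it prints is your hardware's number, not a benchmark from this thread.

```python
# Rough sketch: estimate sustained FP32 matmul throughput in TFLOPS
# on whatever device PyTorch can see (CUDA GPU, Apple M1 via "mps",
# or plain CPU). Illustrative only.
import time
import torch

def pick_device() -> str:
    # Prefer an NVIDIA GPU, then Apple's Metal backend, then CPU.
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

def synchronize(device: str) -> None:
    # GPU kernels launch asynchronously; wait for them before timing.
    if device == "cuda":
        torch.cuda.synchronize()
    elif device == "mps":
        torch.mps.synchronize()

def matmul_tflops(n: int = 4096, iters: int = 20) -> float:
    device = pick_device()
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    for _ in range(3):            # warm-up runs
        a @ b
    synchronize(device)
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    synchronize(device)
    elapsed = time.perf_counter() - start
    flops = 2 * n**3 * iters      # an n x n matmul costs ~2*n^3 FLOPs
    return flops / elapsed / 1e12

if __name__ == "__main__":
    print(f"~{matmul_tflops():.1f} FP32 TFLOPS sustained on {pick_device()}")
```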
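On the server question, the fairest comparison is to run the same load test against an M1 box and a Linux box serving the same app. Here's a minimal sketch using only the Python standard library; the URL, request count, and concurrency level are hypothetical placeholders, not anything measured in this thread.

```python
# Minimal concurrent HTTP load test: fire N requests at an endpoint
# and report requests/second. Run it unchanged against each machine
# under test to get comparable numbers.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"   # hypothetical endpoint under test
REQUESTS = 1000
CONCURRENCY = 50

def fetch(_: int) -> int:
    # Read and discard the body so the full response is actually served.
    with urlopen(URL, timeout=10) as resp:
        resp.read()
        return resp.status

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    statuses = list(pool.map(fetch, range(REQUESTS)))
elapsed = time.perf_counter() - start

ok = sum(1 for s in statuses if s == 200)
print(f"{ok}/{REQUESTS} OK in {elapsed:.2f}s "
      f"({REQUESTS / elapsed:.0f} req/s at concurrency {CONCURRENCY})")
```

Pointing this at the same API, web, or database-backed endpoint on each box would at least answer the throughput half of the question; memory footprint you'd still have to watch separately, which is where the always-on macOS GUI hurts.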