11 Comments
Elstar - Wednesday, November 6, 2019 - link
If they just used ML to find the best ML benchmark, they'd be at 1.0 by now.
Amandtec - Wednesday, November 6, 2019 - link
Ha Ha. But what if the process snowballs? By version 3 it might refuse to open the pod bay doors.
ballsystemlord - Wednesday, November 6, 2019 - link
Spelling error: "As one MLPerf representative noted in our call, they organization essentially received results for every type of processor except for neuromorphic and analog systems."
"The", not "they":
"As one MLPerf representative noted in our call, the organization essentially received results for every type of processor except for neuromorphic and analog systems."
ballsystemlord - Wednesday, November 6, 2019 - link
So where are the individual numbers?
ballsystemlord - Wednesday, November 6, 2019 - link
I mean per product, of course.
Ryan Smith - Thursday, November 7, 2019 - link
Sorry, the URL for the results wasn't available at the time this story was being written.
https://mlperf.org/inference-results/
webdoctors - Thursday, November 7, 2019 - link
Thanks. Is there a reason only the Nvidia accelerated machines have results in all columns? I'd imagine all platforms should be able to run all the tests right?
Ryan Smith - Thursday, November 7, 2019 - link
It's up to the vendors to decide what tests they wish to submit results for. NVIDIA, presumably, was feeling confident about the flexibility of its wares.
mode_13h - Wednesday, November 13, 2019 - link
Perhaps it had more to do with Nvidia being one of MLPerf's "reference" platforms.
20x T4's in a single box? Wow, the fan noise has got to be up there with any blade server.
name99 - Thursday, November 7, 2019 - link
"Intending to do for ML performance what SPEC has done for CPU and general system performance"So
- create a target for compiler optimization with little real world relevance AND
- create a pool of internet weirdos who continue to insist that if the results don't match their company favorites then it's a useless benchmark of zero relevance
???
I kid. But sadly, not by much...
mode_13h - Wednesday, November 13, 2019 - link
Wow, serious respect for Cloud TPU v3. Not only does it post leading 1x performance numbers, but scaling also seems pretty good.
Nvidia's next gen better be seriously good, or else they risk being run out of the AI market by purpose-built ASICs.