Real-time collaboration for Jupyter Notebooks, Linux Terminals, LaTeX, VS Code, R IDE, and more,
all in one place. Commercial Alternative to JupyterHub.
"Guiding Future STEM Leaders through Innovative Research Training" ~ thinkingbeyond.education
Path: ThinkingBeyond Activities / BeyondAI-2024-Mentee-Projects / palak-sumayah / Moon_Dataset_(High_Noise).ipynb
Image: ubuntu2204
Kernel: Python 3
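The notebook's code cells did not survive export, only their outputs. Given the filename (`make_moons` with high noise is the likely dataset) and the repeated output format (test accuracy, a `classification_report`, and a timed train-plus-evaluate step over 300 test samples), the results below were presumably produced by a loop of this shape. This is a minimal sketch, not the notebook's actual code: the sample count, noise level, split seed, and the choice of `LogisticRegression` as the example classifier are all assumptions.

```python
import time

from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression  # placeholder classifier; the notebook compares several
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# Assumed setup: a noisy two-moons dataset with a 300-sample test split
# (matching the support of 156 + 144 = 300 in every report below).
X, y = make_moons(n_samples=1500, noise=0.4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=300, random_state=42
)

start = time.perf_counter()
clf = LogisticRegression().fit(X_train, y_train)
y_pred = clf.predict(X_test)
runtime = time.perf_counter() - start

print(f"Accuracy on the test set: {accuracy_score(y_test, y_pred):.2f}")
print("Classification Metrics:")
print(classification_report(y_test, y_pred))
print(f"Runtime for training and evaluation: {runtime:.4f} seconds")
```

Swapping `clf` for different estimators (k-NN, SVM, random forest, and so on) and rerunning the cell would yield a sequence of reports like those that follow; the exact models used for each run are not recoverable from the export.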
Accuracy on the test set: 0.81
Classification Metrics:
              precision    recall  f1-score   support

           0       0.84      0.79      0.81       156
           1       0.78      0.83      0.81       144

    accuracy                           0.81       300
   macro avg       0.81      0.81      0.81       300
weighted avg       0.81      0.81      0.81       300
Runtime for training and evaluation: 0.0076 seconds
Accuracy on the test set: 0.76
Classification Metrics:
              precision    recall  f1-score   support

           0       0.83      0.67      0.74       156
           1       0.71      0.85      0.77       144

    accuracy                           0.76       300
   macro avg       0.77      0.76      0.76       300
weighted avg       0.77      0.76      0.76       300
Runtime for training and evaluation: 0.0045 seconds
Accuracy on the test set: 0.82
Classification Metrics:
              precision    recall  f1-score   support

           0       0.82      0.83      0.82       156
           1       0.81      0.81      0.81       144

    accuracy                           0.82       300
   macro avg       0.82      0.82      0.82       300
weighted avg       0.82      0.82      0.82       300
Runtime for training and evaluation: 0.0216 seconds
Accuracy on the test set: 0.78
Classification Metrics:
              precision    recall  f1-score   support

           0       0.78      0.80      0.79       156
           1       0.78      0.76      0.77       144

    accuracy                           0.78       300
   macro avg       0.78      0.78      0.78       300
weighted avg       0.78      0.78      0.78       300
Runtime for training and evaluation: 0.2201 seconds
Accuracy on the test set: 0.82
Classification Metrics:
              precision    recall  f1-score   support

           0       0.82      0.83      0.83       156
           1       0.82      0.81      0.81       144

    accuracy                           0.82       300
   macro avg       0.82      0.82      0.82       300
weighted avg       0.82      0.82      0.82       300
Runtime for training and evaluation: 4.8854 seconds
Accuracy on the test set: 0.81
Classification Metrics:
              precision    recall  f1-score   support

           0       0.84      0.79      0.81       156
           1       0.79      0.84      0.81       144

    accuracy                           0.81       300
   macro avg       0.81      0.81      0.81       300
weighted avg       0.82      0.81      0.81       300
Runtime for training and evaluation: 0.0017 seconds
Accuracy on the test set: 0.79
Classification Metrics:
              precision    recall  f1-score   support

           0       0.80      0.81      0.80       156
           1       0.79      0.78      0.78       144

    accuracy                           0.79       300
   macro avg       0.79      0.79      0.79       300
weighted avg       0.79      0.79      0.79       300
Runtime for training and evaluation: 0.2142 seconds
Accuracy on the test set: 0.73
Classification Metrics:
              precision    recall  f1-score   support

           0       0.76      0.71      0.73       156
           1       0.71      0.75      0.73       144

    accuracy                           0.73       300
   macro avg       0.73      0.73      0.73       300
weighted avg       0.73      0.73      0.73       300
Runtime for training and evaluation: 0.0044 seconds