Panos Louridas
2016-01-26 12:17:30 UTC
Hello,
A few points on the documentation / examples in the scikit-learn site:
* In the example that plots the decision surface of a decision tree on the Iris dataset (http://scikit-learn.org/stable/auto_examples/tree/plot_iris.html#example-tree-plot-iris-py), the dataset is initially shuffled and standardised. Is that necessary? Decision trees do not require data shuffling and standardisation, or am I mistaken?
* In the bias-variance decomposition example (http://scikit-learn.org/stable/auto_examples/ensemble/plot_bias_variance.html#example-ensemble-plot-bias-variance-py) it would be nice if the acronym “LS” were explained. Right now I can think of a couple of possibilities for what it might mean.
* The FAQ link on the main page (http://scikit-learn.org/stable/faq/) is broken.
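On the first point, here is a quick sketch of my own (not taken from the linked example) suggesting that standardisation makes no difference for a decision tree: the splits depend only on the ordering of feature values, so a monotonic rescaling of the features should leave the fitted tree's predictions unchanged.

```python
# Hypothetical illustration (my own, not from the scikit-learn example):
# a decision tree's splits depend only on the ordering of feature values,
# so standardising the features should not change its predictions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
Xs = StandardScaler().fit_transform(X)  # standardised copy of the features

raw = DecisionTreeClassifier(random_state=0).fit(X, y)
scaled = DecisionTreeClassifier(random_state=0).fit(Xs, y)

# Both fully grown trees make identical predictions on the training set.
print(np.array_equal(raw.predict(X), scaled.predict(Xs)))  # prints True
```

Shuffling likewise has no effect on the fitted tree here, since the splitting criterion is computed over the whole training set regardless of sample order.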
Thanks for your excellent work and best regards,
Panos.