Discussion:
[Scikit-learn-general] SO question for the tree growers
Olivier Grisel
2013-04-04 19:09:15 UTC
Permalink
The variable importance in scikit-learn's implementation of random
forests is based on the proportion of samples that were classified by
the feature at some point during the evaluation of the decision trees.

http://scikit-learn.org/stable/modules/ensemble.html#feature-importance-evaluation

This method seems different from the OOB-based method of Breiman 2001
(section 10):

http://www.stat.berkeley.edu/~breiman/randomforest2001.pdf

Is there any reference for the method implemented in the scikit?

Here is the original Stack Overflow question:

http://stackoverflow.com/questions/15810339/how-are-feature-importances-in-randomforestclassifier-determined/15811003?noredirect=1#comment22487062_15811003

--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
Peter Prettenhofer
2013-04-04 19:35:38 UTC
Permalink
I posted a brief description of the algorithm. The method that we
implement is briefly described in ESLII. Gilles is the expert here; he
can give more details on the issue.
Post by Olivier Grisel
Is there any reference for the method implemented in the scikit?
--
Peter Prettenhofer
Gilles Louppe
2013-04-04 21:11:33 UTC
Permalink
Hi Olivier,

There are indeed several ways to get feature "importances". As is often
the case, there is no strict consensus on what the word means.

In our case, we implement the importance as described in [1] (often cited,
but unfortunately rarely read...). It is sometimes called "gini importance"
or "mean decrease impurity" and is defined as the total decrease in node
impurity, weighted by the probability of reaching that node (approximated
by the proportion of samples reaching it), averaged over all trees of the
ensemble.
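
Concretely (a minimal sketch, not part of the original thread), this is
the quantity that a fitted scikit-learn forest exposes as its
feature_importances_ attribute:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(data.data, data.target)

    # One value per feature, normalized so that they sum to 1.
    for name, imp in zip(data.feature_names, forest.feature_importances_):
        print("%s: %.3f" % (name, imp))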

The other measure is the one you describe. It is sometimes called "mean
decrease accuracy". It is more intensive to compute since it requires
(repeated) random permutations of each feature. It also works only with
bootstrapping.
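
For illustration only, here is a sketch of that permutation idea under my
own assumptions (a recent scikit-learn for the imports, a held-out set
instead of Breiman's out-of-bag samples; this is not scikit-learn's
implementation, which does not ship this measure):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)

    rng = np.random.RandomState(0)
    baseline = forest.score(X_test, y_test)
    for j in range(X_test.shape[1]):
        X_perm = X_test.copy()
        rng.shuffle(X_perm[:, j])  # break the link between feature j and y
        print("feature %d: accuracy drop %.3f"
              % (j, baseline - forest.score(X_perm, y_test)))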

Note that both measures are available in the randomForest R package.

[1]: Breiman, Friedman, Olshen and Stone, "Classification and Regression Trees", 1984.

I'll reply on SO as well.

Hope this helps,

Gilles
Post by Peter Prettenhofer
Gilles is the expert here; he can give more details on the issue.
Olivier Grisel
2013-04-04 21:14:22 UTC
Permalink
Thank you to both of you! I learned something new today :)
P***@merckgroup.com
2013-04-05 11:35:34 UTC
Permalink
Dear Gilles,

Sorry to jump into the discussion, but it piqued my interest.
In R's randomForest package, MeanDecreaseGini can be calculated.

Does scikit-learn somehow scale MeanDecreaseGini to a percentage scale?

Please find attached the variable importances as computed by scikit-learn's
RF and R's RF.
(See attached file: RF_sklearn.png)
(See attached file: RF_R.png)

In the R case I only had 10 features, but in the sklearn case there were
a few more. Of course, one cannot compare the absolute numbers of
VariableImportance/MeanDecreaseGini, but I'm astonished to see such large
values in the R implementation.


Cheers & Thanks,
Paul
Post by Gilles Louppe
Note that both measures are available in the randomForest R package.
Gilles Louppe
2013-04-05 20:57:03 UTC
Permalink
Hi Paul,


Post by P***@merckgroup.com
Sorry to jump into the discussion, but it piqued my interest.
In R's randomForest package, MeanDecreaseGini can be calculated.
Does scikit-learn somehow scale MeanDecreaseGini to a percentage scale?
Yes. In the randomForest R package, by contrast, there is basically no
scaling or normalization.

In the randomForest package, the mean decrease is the total weighted Gini
decrease, summed over all nodes splitting on that feature and averaged
over all trees. The Gini decreases are weighted by the number of samples
in the corresponding nodes, while in scikit-learn they are weighted by the
proportion of samples. We use that definition to have a measure that is
independent of the number of samples. (Both are equivalent up to a
constant factor.)

Also, in scikit-learn, the feature importances vector is normalized to
have unit norm, while there is no such post-processing in the randomForest
R package.
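
A tiny sketch of the difference, using hypothetical per-feature Gini
totals (the raw numbers below are made up for illustration):

    import numpy as np

    raw = np.array([12.4, 3.1, 0.7, 8.8])  # hypothetical MeanDecreaseGini totals
    print(raw / raw.sum())  # the normalized form scikit-learn reports; sums to 1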
Post by P***@merckgroup.com
Please find attached the variable importances as computed by scikit-learn's
RF and R's RF.
In the R case I only had 10 features, but in the sklearn case there were
a few more. Of course, one cannot compare the absolute numbers of
VariableImportance/MeanDecreaseGini, but I'm astonished to see such large
values in the R implementation.
Please see my comments above. This is not surprising given the
normalization scheme we use.

Note that you should also use the same set of features when comparing
importances. Since the importance of a feature captures multivariate
effects, any relevant feature might affect the importance of another.
Using different feature sets might therefore lead to significantly
different results.
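
As a quick illustration of that point (a sketch, not from the thread):
duplicating an informative column typically splits its mean decrease
impurity score across the copies, so the same feature scores differently
depending on what else is in the set:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    print(forest.fit(X, y).feature_importances_)

    X_dup = np.hstack([X, X[:, [3]]])  # append an exact copy of feature 3
    print(forest.fit(X_dup, y).feature_importances_)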

Hope this answers some of your questions,

best,

Gilles