| Metric | Value |
| --- | --- |
| Conditions | 3 |
| Total Lines | 52 |
| Lines | 0 |
| Ratio | 0 % |
| Changes | 2 |
| Bugs | 0 |
| Features | 2 |
Small methods make your code easier to understand, especially when combined with a good name. Moreover, if a method is small, finding a good name for it is usually much easier.
For example, if you find yourself adding comments inside a method's body, that is usually a sign that you should extract the commented part into a new method and use the comment as a starting point for naming it.
Commonly applied refactorings include:

- Extract Method: move a coherent block (for instance, a commented block as described above) into its own, well-named method. A small sketch follows this list.
- Replace Method with Method Object: if many parameters/temporary variables are present and make a plain extraction awkward, turn the method into its own class whose fields replace the temporaries.
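As a minimal, hypothetical sketch of Extract Method (the `print_invoice` and `print_header` names and the invoice dictionary are invented for illustration, not taken from this project):

```python
# Before: the comment hints at a hidden abstraction.
def print_invoice(invoice):
    # print the header
    print('*' * 40)
    print(f"Invoice {invoice['id']} for {invoice['customer']}")
    print('*' * 40)
    total = sum(item['price'] for item in invoice['items'])
    print(f'Total: {total:.2f}')


# After: the commented block is extracted, and the comment becomes the name.
def print_header(invoice):
    print('*' * 40)
    print(f"Invoice {invoice['id']} for {invoice['customer']}")
    print('*' * 40)


def print_invoice(invoice):
    print_header(invoice)
    total = sum(item['price'] for item in invoice['items'])
    print(f'Total: {total:.2f}')
```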
```python
#! /usr/bin/env python

import numpy as np


def bedroc_score(y_true, y_pred, decreasing=True, alpha=20.0):
    """BEDROC metric implemented according to Truchon and Bayly.

    The Boltzmann-Enhanced Discrimination of the Receiver Operating
    Characteristic (BEDROC) score is a modification of the Receiver Operating
    Characteristic (ROC) score that allows for a factor of *early recognition*.

    References:
        The original paper by Truchon et al. is located at `10.1021/ci600426e
        <http://dx.doi.org/10.1021/ci600426e>`_.

    Args:
        y_true (array_like):
            Binary class labels. 1 for positive class, 0 otherwise.
        y_pred (array_like):
            Prediction values.
        decreasing (bool):
            True if high values of ``y_pred`` correlate with the positive
            class.
        alpha (float):
            Early recognition parameter.

    Returns:
        float:
            Value in the interval [0, 1] indicating the degree to which the
            predictive technique detects the positive class early.
    """
    assert len(y_true) == len(y_pred), \
        'The number of scores must be equal to the number of labels'

    # Accept any array_like input, as documented.
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    N = len(y_true)
    n = sum(y_true == 1)

    # Sort so that the best-scoring samples come first.
    if decreasing:
        order = np.argsort(-y_pred)
    else:
        order = np.argsort(y_pred)

    # One-based ranks of the actives in the sorted list, as in the paper.
    m_rank = (y_true[order] == 1).nonzero()[0] + 1

    # Exponentially weighted sum over the ranks of the actives.
    s = np.sum(np.exp(-alpha * m_rank / N))

    # Fraction of actives and the expected sum for a random ordering.
    r_a = n / N
    rand_sum = r_a * (1 - np.exp(-alpha)) / (np.exp(alpha / N) - 1)

    # Scaling factor and constant that map the enrichment onto [0, 1].
    fac = r_a * np.sinh(alpha / 2) / \
        (np.cosh(alpha / 2) - np.cosh(alpha / 2 - alpha * r_a))
    cte = 1 / (1 - np.exp(alpha * (1 - r_a)))

    return s * fac / rand_sum + cte
```
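A minimal usage sketch follows; the labels and scores are made-up illustration data, not taken from any real benchmark:

```python
import numpy as np

# Hypothetical data: 1 marks an active compound, 0 an inactive one.
y_true = np.array([1, 0, 1, 0, 0, 0, 1, 0, 0, 0])
# Higher scores are assumed to indicate the positive class (decreasing=True).
y_pred = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05])

score = bedroc_score(y_true, y_pred, decreasing=True, alpha=20.0)
print(f'BEDROC: {score:.3f}')  # closer to 1 means better early recognition
```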
The coding style of this project requires that you add a docstring to this code element. Below is an example for methods:
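A minimal sketch of a method docstring in the same Google style used by `bedroc_score` above; the method `normalize_scores` and its parameters are invented for illustration:

```python
import numpy as np


def normalize_scores(scores, lower=0.0, upper=1.0):
    """Rescale scores linearly onto the interval [lower, upper].

    Args:
        scores (array_like): Raw prediction scores.
        lower (float): Lower bound of the target interval.
        upper (float): Upper bound of the target interval.

    Returns:
        numpy.ndarray: The rescaled scores.
    """
    scores = np.asarray(scores, dtype=float)
    span = scores.max() - scores.min()
    if span == 0:
        # All scores are identical: map everything to the lower bound.
        return np.full_like(scores, lower)
    return lower + (scores - scores.min()) * (upper - lower) / span
```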
If you would like to know more about docstrings, we recommend reading PEP 257: Docstring Conventions.