I am trying to run grade_analysis.py from the terminal in Visual Studio Code using the following command:
~/documents/school/ml4t_2023fall/assess_portfolio$ PYTHONPATH=../:. python grade_analysis.py
per the class setup instructions.
However, when I run the command, grade_analysis.py does not seem to be able to go up a level and pull in the grading/grading.py file.
Am I using this command incorrectly, or am I missing something?
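One quick sanity check is to print the import search path and confirm that ../ is actually being picked up (nothing here is specific to the class setup):
$ PYTHONPATH=../:. python -c "import sys; print(sys.path)"
If ../ does not appear in the printed list, the grading package one level up will not be importable.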
This is the error I receive:
2023fall/assess_portfolio$ PYTHONPATH=../:. python grade_analysis.py
Traceback (most recent call last):
  File "grade_analysis.py", line 20, in <module>
    import pytest
  File "/home/clopez/miniconda3/envs/ml4t/lib/python3.6/site-packages/pytest.py", line 34, in <module>
    from _pytest.python_api import approx
  File "/home/clopez/miniconda3/envs/ml4t/lib/python3.6/site-packages/_pytest/python_api.py", line 13, in <module>
    from more_itertools.more import always_iterable
  File "/home/clopez/miniconda3/envs/ml4t/lib/python3.6/site-packages/more_itertools/__init__.py", line 3, in <module>
    from .more import *  # noqa
  File "/home/clopez/miniconda3/envs/ml4t/lib/python3.6/site-packages/more_itertools/more.py", line 5, in <module>
    from functools import cached_property, partial, reduce, wraps
ImportError: cannot import name 'cached_property'
Environment setup instructions
conda environment yml
name: ml4t
channels:
- conda-forge
- defaults
dependencies:
- python=3.6
- cycler=0.10.0
- kiwisolver=1.1.0
- matplotlib=3.0.3
- numpy=1.16.3
- pandas=0.24.2
- pyparsing=2.4.0
- python-dateutil=2.8.0
- pytz=2019.1
- scipy=1.2.1
- seaborn=0.9.0
- six=1.12.0
- joblib=0.13.2
- pytest=5.0
- pytest-json=0.4.0
- future=0.17.1
- pprofile=2.0.2
- pip
- pip:
- jsons==0.8.8
- gradescope-utils
- subprocess32
grade_analysis.py
"""MC1-P1: Analyze a portfolio - grading script.
Usage:
- Switch to a student feedback directory first (will write "points.txt" and "comments.txt" in pwd).
- Run this script with both ml4t/ and student solution in PYTHONPATH, e.g.:
PYTHONPATH=ml4t:MC1-P1/jdoe7 python ml4t/mc1_p1_grading/grade_analysis.py
Copyright 2017, Georgia Tech Research Corporation
Atlanta, Georgia 30332-0415
All Rights Reserved
"""
import datetime
import os
import sys
import traceback as tb
from collections import OrderedDict, namedtuple
import pandas as pd
import pytest
from grading.grading import (
GradeResult,
IncorrectOutput,
grader,
run_with_timeout,
)
from util import get_data
# Student code
# Spring '16 renamed package to just "analysis" (BPH)
main_code = "analysis" # module name to import
# Test cases
# Spring '16 test cases only check sharpe ratio, avg daily ret, and cum_ret (BPH)
PortfolioTestCase = namedtuple(
"PortfolioTestCase", ["inputs", "outputs", "description"]
)
portfolio_test_cases = [
PortfolioTestCase(
inputs=dict(
start_date="2010-01-01",
end_date="2010-12-31",
symbol_allocs=OrderedDict(
[("GOOG", 0.2), ("AAPL", 0.3), ("GLD", 0.4), ("XOM", 0.1)]
),
start_val=1000000,
),
outputs=dict(
cum_ret=0.255646784534,
avg_daily_ret=0.000957366234238,
sharpe_ratio=1.51819243641,
),
description="Wiki example 1",
),
PortfolioTestCase(
inputs=dict(
start_date="2010-01-01",
end_date="2010-12-31",
symbol_allocs=OrderedDict(
[("AXP", 0.0), ("HPQ", 0.0), ("IBM", 0.0), ("HNZ", 1.0)]
),
start_val=1000000,
),
outputs=dict(
cum_ret=0.198105963655,
avg_daily_ret=0.000763106152672,
sharpe_ratio=1.30798398744,
),
description="Wiki example 2",
),
PortfolioTestCase(
inputs=dict(
start_date="2010-06-01",
end_date="2010-12-31",
symbol_allocs=OrderedDict(
[("GOOG", 0.2), ("AAPL", 0.3), ("GLD", 0.4), ("XOM", 0.1)]
),
start_val=1000000,
),
outputs=dict(
cum_ret=0.205113938792,
avg_daily_ret=0.00129586924366,
sharpe_ratio=2.21259766672,
),
description="Wiki example 3: Six month range",
),
PortfolioTestCase(
inputs=dict(
start_date="2010-01-01",
end_date="2013-05-31",
symbol_allocs=OrderedDict(
[("AXP", 0.3), ("HPQ", 0.5), ("IBM", 0.1), ("GOOG", 0.1)]
),
start_val=1000000,
),
outputs=dict(
cum_ret=-0.110888530433,
avg_daily_ret=-6.50814806831e-05,
sharpe_ratio=-0.0704694718385,
),
description="Normalization check",
),
PortfolioTestCase(
inputs=dict(
start_date="2010-01-01",
end_date="2010-01-31",
symbol_allocs=OrderedDict(
[("AXP", 0.9), ("HPQ", 0.0), ("IBM", 0.1), ("GOOG", 0.0)]
),
start_val=1000000,
),
outputs=dict(
cum_ret=-0.0758725033871,
avg_daily_ret=-0.00411578300489,
sharpe_ratio=-2.84503813366,
),
description="One month range",
),
PortfolioTestCase(
inputs=dict(
start_date="2011-01-01",
end_date="2011-12-31",
symbol_allocs=OrderedDict(
[("WFR", 0.25), ("ANR", 0.25), ("MWW", 0.25), ("FSLR", 0.25)]
),
start_val=1000000,
),
outputs=dict(
cum_ret=-0.686004563165,
avg_daily_ret=-0.00405018240566,
sharpe_ratio=-1.93664660013,
),
description="Low Sharpe ratio",
),
PortfolioTestCase(
inputs=dict(
start_date="2010-01-01",
end_date="2010-12-31",
symbol_allocs=OrderedDict(
[("AXP", 0.0), ("HPQ", 1.0), ("IBM", 0.0), ("HNZ", 0.0)]
),
start_val=1000000,
),
outputs=dict(
cum_ret=-0.191620333598,
avg_daily_ret=-0.000718040989619,
sharpe_ratio=-0.71237182415,
),
description="All your eggs in one basket",
),
PortfolioTestCase(
inputs=dict(
start_date="2006-01-03",
end_date="2008-01-02",
symbol_allocs=OrderedDict(
[("MMM", 0.0), ("MO", 0.9), ("MSFT", 0.1), ("INTC", 0.0)]
),
start_val=1000000,
),
outputs=dict(
cum_ret=0.43732715979,
avg_daily_ret=0.00076948918955,
sharpe_ratio=1.26449481371,
),
description="Two year range",
),
]
abs_margins = dict(
cum_ret=0.001, avg_daily_ret=0.00001, sharpe_ratio=0.001
) # absolute margin of error for each output
points_per_output = dict(
cum_ret=2.5, avg_daily_ret=2.5, sharpe_ratio=5.0
) # points for each output, for partial credit
points_per_test_case = sum(points_per_output.values())
max_seconds_per_call = 5
# Grading parameters (picked up by module-level grading fixtures)
max_points = float(len(portfolio_test_cases) * points_per_test_case)
html_pre_block = (
    True  # surround comments with HTML <pre> tag (for T-Square comments field)
)
# Test function(s)
@pytest.mark.parametrize("inputs,outputs,description", portfolio_test_cases)
def test_analysis(inputs, outputs, description, grader):
"""Test get_portfolio_value() and get_portfolio_stats() return correct values.
Requires test inputs, expected outputs, description, and a grader fixture.
"""
points_earned = 0.0 # initialize points for this test case
try:
# Try to import student code (only once)
if not main_code in globals():
import importlib
# * Import module
mod = importlib.import_module(main_code)
globals()[main_code] = mod
# Unpack test case
start_date_str = inputs["start_date"].split("-")
start_date = datetime.datetime(
int(start_date_str[0]),
int(start_date_str[1]),
int(start_date_str[2]),
)
end_date_str = inputs["end_date"].split("-")
end_date = datetime.datetime(
int(end_date_str[0]), int(end_date_str[1]), int(end_date_str[2])
)
symbols = list(
inputs["symbol_allocs"].keys()
) # e.g.: ['GOOG', 'AAPL', 'GLD', 'XOM']
allocs = list(
inputs["symbol_allocs"].values()
) # e.g.: [0.2, 0.3, 0.4, 0.1]
start_val = inputs["start_val"]
risk_free_rate = inputs.get("risk_free_rate", 0.0)
# the wonky unpacking here is so that we only pull out the values we say we'll test.
def timeoutwrapper_analysis():
student_rv = analysis.assess_portfolio(
sd=start_date,
ed=end_date,
syms=symbols,
allocs=allocs,
sv=start_val,
rfr=risk_free_rate,
sf=252.0,
gen_plot=False,
)
return student_rv
result = run_with_timeout(
timeoutwrapper_analysis, max_seconds_per_call, (), {}
)
student_cr = result[0]
student_adr = result[1]
student_sr = result[3]
port_stats = OrderedDict(
[
("cum_ret", student_cr),
("avg_daily_ret", student_adr),
("sharpe_ratio", student_sr),
]
)
# Verify against expected outputs and assign points
incorrect = False
msgs = []
for key, value in port_stats.items():
if abs(value - outputs[key]) > abs_margins[key]:
incorrect = True
msgs.append(
" {}: {} (expected: {})".format(
key, value, outputs[key]
)
)
else:
points_earned += points_per_output[key] # partial credit
if incorrect:
inputs_str = (
" start_date: {}\n"
" end_date: {}\n"
" symbols: {}\n"
" allocs: {}\n"
" start_val: {}".format(
start_date, end_date, symbols, allocs, start_val
)
)
raise IncorrectOutput(
"One or more stats were incorrect.\n Inputs:\n{}\n Wrong"
" values:\n{}".format(inputs_str, "\n".join(msgs))
)
except Exception as e:
# Test result: failed
msg = "Test case description: {}\n".format(description)
# Generate a filtered stacktrace, only showing erroneous lines in student file(s)
tb_list = tb.extract_tb(sys.exc_info()[2])
for i in range(len(tb_list)):
row = tb_list[i]
tb_list[i] = (
os.path.basename(row[0]),
row[1],
row[2],
row[3],
) # show only filename instead of long absolute path
tb_list = [row for row in tb_list if row[0] == "analysis.py"]
if tb_list:
msg += "Traceback:\n"
msg += "".join(tb.format_list(tb_list)) # contains newlines
msg += "{}: {}".format(e.__class__.__name__, str(e))
# Report failure result to grader, with stacktrace
grader.add_result(
GradeResult(outcome="failed", points=points_earned, msg=msg)
)
raise
else:
# Test result: passed (no exceptions)
grader.add_result(
GradeResult(outcome="passed", points=points_earned, msg=None)
)
if __name__ == "__main__":
pytest.main(["-s", __file__])
I have activated the conda environment and set up the files so that the script should be able to access the util.py and grading.py files.
I expect that after running the command, the analysis.py file will be graded by grade_analysis.py.
Accepted answer
This is why conda-lock lock files (or containerization) give better long-term reproducibility than a yaml. Transitive dependencies (such as more-itertools) are left unconstrained in the yaml, and other packages' requirements may lack proper upper bounds. In this case, the OP ended up with a version of the more_itertools module that references something only added to functools in a later Python release.
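To make the root cause concrete: functools.cached_property only exists on Python 3.8+, so under the pinned python=3.6 any import of it must fail. A minimal check (my own illustration, not from the original answer):
$ python -c "import functools; print(hasattr(functools, 'cached_property'))"
False
On a Python 3.8 or newer interpreter, the same command prints True.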
Bisecting shows the problematic reference (to cached_property) starting with more_itertools v10, so setting an upper bound should fix the problem:
name: ml4t
channels:
- conda-forge
- defaults
dependencies:
- python=3.6
- cycler=0.10.0
- kiwisolver=1.1.0
- matplotlib=3.0.3
- more-itertools<10 # <- prevent v10+
- numpy=1.16.3
- pandas=0.24.2
- pyparsing=2.4.0
- python-dateutil=2.8.0
- pytz=2019.1
- scipy=1.2.1
- seaborn=0.9.0
- six=1.12.0
- joblib=0.13.2
- pytest=5.0
- pytest-json=0.4.0
- future=0.17.1
- pprofile=2.0.2
- pip
- pip:
- jsons==0.8.8
- gradescope-utils
- subprocess32
With this yaml, the import that previously triggered the error now works:
$ python -c "from more_itertools.more import always_iterable"
$ echo $?
0
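To double-check which version the solver actually installed after recreating the environment, standard conda tooling works (the exact version printed will depend on the solve):
$ conda list more-itertools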