Processing, Analysis and Visualization of China Unicom Mobile Signaling Big Data
Published: 2021-06-29 19:49:27
I have China Unicom's 2020 expanded-sample migration headcount data, covering all cities. If you need it, you can find my QQ contact in my other articles. The data-processing code:
```python
import os

import pandas as pd

from utils.read_write import eachFile, pdReadCsv

'''
Average population travelling from each community to each mall.
Date codes: 3 = holiday, 2 = weekend, 1 = workday.
Fields in the raw signaling files:
 * START_GRID_ID  origin grid ID, string
 * START_CITY     origin city, string
 * END_GRID_ID    destination grid ID, string
 * END_CITY       destination city, string
 * date           date, string
 * START_TYPE     origin population type, string: 01 visiting, 02 residing,
                  03 working, 05 living and working in the same place
 * END_TYPE       destination population type, string: 01 visiting,
                  02 residing, 03 working
 * POP            population, int
 * times          trip count
'''


# def test():
#     filepath = os.path.join(root + '000054_0_weekend.txt')
#     data = pd.read_csv(filepath, sep='|', usecols=[0, 2, 4, 7], error_bad_lines=False, engine='python')
#     data.columns = ['START_GRID_ID', 'END_GRID_ID', 'date', 'pop']
#     # data = data[data['date'].isin([20191013])]
#     workFromCom = pd.merge(data, community, left_on='START_GRID_ID', right_on='YGA_Grid_1', how='right')
#     workFromComToMall = pd.merge(workFromCom, mall, left_on='END_GRID_ID', right_on='YGA_Grid_1', how='right')
#     workGroup = workFromComToMall.groupby(['SQCODE', 'mall_name']).agg({'pop': 'sum'})
#     csv = workGroup['pop'].apply(lambda x: int(x / 5))
#     csv.to_csv(filepath + 'holidayFromCommunityToMall.csv', mode='a')


def read_file(dirpath):
    filepath = os.path.join(dirpath)
    print(dirpath)
    # Raw files are pipe-separated; malformed rows are skipped.
    # Note: error_bad_lines was removed in pandas 2.0; use on_bad_lines='skip' there.
    data = pd.read_csv(filepath, sep='|', usecols=[0, 2, 4, 7], error_bad_lines=False, engine='python')
    data.columns = ['START_GRID_ID', 'END_GRID_ID', 'date', 'pop']

    # Weekend: 2019-10-13 only (scaled down by 5 as in the original sampling)
    weekend = data[data['date'] == 20191013]
    workFromCom = pd.merge(weekend, community, left_on='START_GRID_ID', right_on='YGA_Grid_1', how='right')
    workFromComToMall = pd.merge(workFromCom, mall, left_on='END_GRID_ID', right_on='YGA_Grid_1', how='right')
    workGroup = workFromComToMall.groupby(['SQCODE', 'mall_name']).agg({'pop': 'sum'})
    csv = workGroup['pop'].apply(lambda x: int(x / 5))
    csv.to_csv(save + 'weekendFromCommunityToMall.csv', mode='a', header=False, index=True)

    # National Day holiday: dates before 2019-10-08
    holiday = data[data['date'] < 20191008]
    workFromCom = pd.merge(holiday, community, left_on='START_GRID_ID', right_on='YGA_Grid_1', how='right')
    workFromComToMall = pd.merge(workFromCom, mall, left_on='END_GRID_ID', right_on='YGA_Grid_1', how='right')
    workGroup = workFromComToMall.groupby(['SQCODE', 'mall_name']).agg({'pop': 'sum'})
    csv = workGroup['pop'].apply(lambda x: int(x / 5))
    csv.to_csv(save + 'holidayFromCommunityToMall.csv', mode='a', header=False, index=True)

    # Workdays: everything after the holiday, excluding the weekend day
    work = data[(data['date'] > 20191007) & (data['date'] != 20191013)]
    workFromCom = pd.merge(work, community, left_on='START_GRID_ID', right_on='YGA_Grid_1', how='right')
    workFromComToMall = pd.merge(workFromCom, mall, left_on='END_GRID_ID', right_on='YGA_Grid_1', how='right')
    workGroup = workFromComToMall.groupby(['SQCODE', 'mall_name']).agg({'pop': 'sum'})
    csv = workGroup['pop'].apply(lambda x: int(x / 5))
    csv.to_csv(save + 'workFromCommunityToMall.csv', mode='a', header=False, index=True)


def groupby():
    # Aggregate the per-file CSVs written by read_file and average per day:
    # 6 workdays, 7 holiday days, weekend already a single day.
    src = 'D:\学习文件\项目文件\规土委\data\od\save\save\\'
    data = pd.read_csv(src + 'workFromCommunityToMall.csv', sep=',', names=['SQCODE', 'mall_name', 'pop'])
    group = data.groupby(['SQCODE', 'mall_name']).agg({'pop': 'sum'})
    csv = group['pop'].apply(lambda x: int(x / 6))
    csv.to_csv(src + 'workCommunityToMall.csv', header=True)

    data = pd.read_csv(src + 'holidayFromCommunityToMall.csv', sep=',', names=['SQCODE', 'mall_name', 'pop'])
    group = data.groupby(['SQCODE', 'mall_name']).agg({'pop': 'sum'})
    csv = group['pop'].apply(lambda x: int(x / 7))
    csv.to_csv(src + 'holidayCommunityToMall.csv', header=True)

    data = pd.read_csv(src + 'weekendFromCommunityToMall.csv', sep=',', names=['SQCODE', 'mall_name', 'pop'])
    group = data.groupby(['SQCODE', 'mall_name']).agg({'pop': 'sum'})
    group.to_csv(src + 'weekendCommunityToMall.csv', header=True)


if __name__ == '__main__':
    # groupby() aggregates CSVs produced by earlier read_file runs
    groupby()
    root = 'D:\学习文件\项目文件\规土委\data\od\other\\'
    save = 'D:\学习文件\项目文件\规土委\data\od\comTomall\\'
    grid = 'D:\学习文件\项目文件\规土委\data\od\YGA\\'
    community_file = 'com_grid.txt'
    community = pdReadCsv(grid + community_file, sep=',')
    mall = pd.read_csv(grid + 'mall_grid.txt', sep=',', dtype=str)
    # test()
    for dir in eachFile(root):
        # read_file(root + '000054_0_unholiday')
        read_file(root + dir)
```
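The core of the pipeline above is a two-step merge (attaching community and mall metadata to the origin and destination grids) followed by a groupby-sum and a per-day average. Here is a minimal self-contained sketch of that pattern on synthetic data; the grid IDs, community codes, and mall names are made up for illustration, and an inner join is used instead of the article's `how='right'` for simplicity:

```python
import pandas as pd

# Synthetic OD records: origin grid, destination grid, date, population
od = pd.DataFrame({
    'START_GRID_ID': ['g1', 'g1', 'g2', 'g2'],
    'END_GRID_ID':   ['m1', 'm1', 'm1', 'm2'],
    'date':          [20191012, 20191013, 20191012, 20191013],
    'pop':           [100, 60, 40, 20],
})

# Grid-to-community and grid-to-mall lookup tables
community = pd.DataFrame({'YGA_Grid_1': ['g1', 'g2'], 'SQCODE': ['C01', 'C02']})
mall = pd.DataFrame({'YGA_Grid_1': ['m1', 'm2'], 'mall_name': ['MallA', 'MallB']})

# Attach the community code to the origin grid and the mall name
# to the destination grid
df = od.merge(community, left_on='START_GRID_ID', right_on='YGA_Grid_1')
df = df.merge(mall, left_on='END_GRID_ID', right_on='YGA_Grid_1')

# Total flow per (community, mall) pair, then divide by the number of
# days covered (2 in this toy sample) to get a daily average
avg = df.groupby(['SQCODE', 'mall_name'])['pop'].sum() // 2

print(avg)
# C01 -> MallA: (100 + 60) // 2 = 80
# C02 -> MallA: 40 // 2 = 20
# C02 -> MallB: 20 // 2 = 10
```

The second `merge` produces suffixed `YGA_Grid_1_x`/`YGA_Grid_1_y` columns because both lookup tables share the key name, which is harmless here but worth knowing when inspecting the intermediate frame.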
Reposted from: https://data-mining.blog.csdn.net/article/details/111612516